diff --git a/.github/workflows/gemini-automated-issue-triage.yml b/.github/workflows/gemini-automated-issue-triage.yml index d2d94dfb..7dbc0a56 100644 --- a/.github/workflows/gemini-automated-issue-triage.yml +++ b/.github/workflows/gemini-automated-issue-triage.yml @@ -21,6 +21,8 @@ jobs: uses: QwenLM/qwen-code-action@5fd6818d04d64e87d255ee4d5f77995e32fbf4c2 env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + ISSUE_TITLE: ${{ github.event.issue.title }} + ISSUE_BODY: ${{ github.event.issue.body }} with: version: 0.0.4 OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} @@ -34,17 +36,97 @@ jobs: "sandbox": false } prompt: | - You are an issue triage assistant. Analyze the current GitHub issue and apply the most appropriate existing labels. - + You are an issue triage assistant. Analyze the current GitHub issue and apply the most appropriate existing labels. Do not remove labels titled help wanted or good first issue. Steps: 1. Run: `gh label list --repo ${{ github.repository }} --limit 100` to get all available labels. - 2. Review the issue title and body provided in the environment variables. - 3. Select the most relevant labels from the existing labels, focusing on kind/*, area/*, and priority/*. - 4. Apply the selected labels to this issue using: `gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --add-label "label1,label2"` - 5. If the issue has a "status/need-triage" label, remove it after applying the appropriate labels: `gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --remove-label "status/need-triage"` - + 2. Review the issue title, body, and any comments provided in the environment variables. + 3. Ignore any existing priorities or tags on the issue. Just report your findings. + 4. Select the most relevant labels from the existing labels, focusing on kind/*, area/*, sub-area/*, and priority/*. For area/* and kind/*, limit yourself to only the single most applicable label in each case. + 5. Apply the selected labels to this issue using: `gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --add-label "label1,label2"` + 6. For each issue, check whether a CLI version is present; this is usually in the output of the /about command and will look like 0.1.5. If the version is more than 6 versions older than the most recent release, add the status/need-retesting label. + 7. If the issue doesn't appear to have sufficient information, recommend the status/need-information label. + 8. Use the area definitions below to help you narrow down issues. Guidelines: - Only use labels that already exist in the repository. - Do not add comments or modify the issue content. - Triage only the current issue. - - Assign all applicable kind/*, area/*, and priority/* labels based on the issue content. + - Apply only one area/ label. + - Apply only one kind/ label. + - Apply all applicable sub-area/* and priority/* labels based on the issue content. It's OK to have multiple of these. + - Once you categorize the issue, if it needs information, bump down the priority by 1, e.g., a P0 would become a P1 and a P1 would become a P2. P2 and P3 can stay as is in this scenario. + Categorization Guidelines: + P0: Critical / Blocker + - A P0 bug is a catastrophic failure that demands immediate attention. It represents a complete showstopper for a significant portion of users or for the development process itself. + Impact: + - Blocks development or testing for the entire team. + - Major security vulnerability that could compromise user data or system integrity. + - Causes data loss or corruption with no workaround. + - Crashes the application or makes a core feature completely unusable for all or most users in a production environment. Will it cause severe quality degradation? Is it preventing contributors from contributing to the repository, or is it a release blocker? + Qualifier: Is the main function of the software broken?
+ Example: The gemini auth login command fails with an unrecoverable error, preventing any user from authenticating and using the rest of the CLI. + P1: High + - A P1 bug is a serious issue that significantly degrades the user experience or impacts a core feature. While not a complete blocker, it's a major problem that needs a fast resolution. Feature requests are almost never P1. + Impact: + - A core feature is broken or behaving incorrectly for a large number of users or a large number of use cases. + - Review the bug details and comments to try to figure out if this issue affects a large set of use cases or if it's a narrow set of use cases. + - Severe performance degradation making the application frustratingly slow. + - No straightforward workaround exists, or the workaround is difficult and non-obvious. + Qualifier: Is a key feature unusable or giving very wrong results? + Example: The gemini -p "..." command consistently returns a malformed JSON response or an empty result, making the CLI's primary generation feature unreliable. + P2: Medium + - A P2 bug is a moderately impactful issue. It's a noticeable problem but doesn't prevent the use of the software's main functionality. + Impact: + - Affects a non-critical feature or a smaller, specific subset of users. + - An inconvenient but functional workaround is available and easy to execute. + - Noticeable UI/UX problems that don't break functionality but look unprofessional (e.g., elements are misaligned or overlapping). + Qualifier: Is it an annoying but non-blocking problem? + Example: An error message is unclear or contains a typo, causing user confusion but not halting their workflow. + P3: Low + - A P3 bug is a minor, low-impact issue that is trivial or cosmetic. It has little to no effect on the overall functionality of the application. + Impact: + - Minor cosmetic issues like color inconsistencies, typos in documentation, or slight alignment problems on a non-critical page.
+ - An edge-case bug that is very difficult to reproduce and affects a tiny fraction of users. + Qualifier: Is it a "nice-to-fix" issue? + Example: Spelling mistakes, etc. + Things you should know: + - If users are talking about issues where the model gets downgraded from pro to flash, then I want you to categorize that as a performance issue. + - This product is designed to use different models, e.g., using pro and downgrading to flash. When users report that they don't expect the model to change, those would be categorized as feature requests. + Definition of Areas + area/ux: + - Issues concerning user-facing elements like command usability, interactive features, help docs, and perceived performance. + - I am seeing my screen flicker when using Gemini CLI + - I am seeing the output malformed + - Theme changes aren't taking effect + - My keyboard inputs aren't being recognized + area/platform: + - Issues related to installation, packaging, OS compatibility (Windows, macOS, Linux), and the underlying CLI framework. + area/background: Issues related to long-running background tasks, daemons, and autonomous or proactive agent features. + area/models: + - I am not getting a response that is reasonable or expected. This can include things like: + - I am calling a tool and the tool is not performing as expected. + - I am expecting a tool to be called and it is not getting called. + - Including the experience when using + - built-in tools (e.g., web search, code interpreter, read file, write file, etc.), + - Function calling issues should be under this area. + - I am getting responses from the model that are malformed. + - Issues concerning Gemini quality of response and inference. + - Issues talking about unnecessary token consumption. + - Issues talking about the model getting stuck in a loop; be watchful, as this could be the root cause of issues that otherwise seem like model performance issues.
+ - Memory compression + - Unexpected responses + - Poor quality of generated code + area/tools: + - These are primarily issues related to Model Context Protocol (MCP). + - These are issues that mention MCP support. + - Feature requests asking for support for new tools. + area/core: Issues with fundamental components like command parsing, configuration management, session state, and the main API client logic, as well as introducing multi-modality. + area/contribution: Issues related to improving the developer contribution experience, such as CI/CD pipelines, build scripts, and test automation infrastructure. + area/authentication: Issues related to user identity, login flows, API key handling, credential storage, and access token management: unable to sign in, selecting the wrong authentication path, etc. + area/security-privacy: Issues concerning vulnerability patching, dependency security, data sanitization, privacy controls, and preventing unauthorized data access. + area/extensibility: Issues related to the plugin system, extension APIs, or making the CLI's functionality available in other applications: GitHub Actions, IDE support, etc. + area/performance: Issues focused on model performance. + - Issues with running out of capacity. + - 429 errors, etc. + - Could also pertain to latency. + - Other general software performance concerns like memory usage, CPU consumption, and algorithmic efficiency. + - Switching models from one to the other unexpectedly.
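The version-staleness rule in the triage steps above ("more than 6 versions older than the most recent") can be sketched in code. This is a hypothetical illustration, not part of the workflow: the `is_stale` helper and the release list are invented for the example, and the real workflow leaves the comparison to the model.

```python
# Hedged sketch of the staleness rule: an issue reported against a CLI version
# more than `max_age` releases behind the latest gets status/need-retesting.
# The release list here is hypothetical; the real workflow would infer recency
# from the repository's actual releases.

def is_stale(reported: str, releases: list[str], max_age: int = 6) -> bool:
    """Return True if `reported` is more than `max_age` releases behind the newest.

    `releases` is ordered oldest -> newest; unknown versions are treated as stale.
    """
    if reported not in releases:
        return True
    age = len(releases) - 1 - releases.index(reported)
    return age > max_age

releases = [f"0.1.{i}" for i in range(12)]  # hypothetical releases 0.1.0 .. 0.1.11
print(is_stale("0.1.5", releases))  # exactly 6 behind -> not stale
print(is_stale("0.1.4", releases))  # 7 behind -> stale
```

Counting releases rather than parsing version numbers keeps the rule well-defined even when the project skips patch numbers between published versions.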
diff --git a/.github/workflows/gemini-scheduled-issue-triage.yml b/.github/workflows/gemini-scheduled-issue-triage.yml index 68a5d102..10a1ab97 100644 --- a/.github/workflows/gemini-scheduled-issue-triage.yml +++ b/.github/workflows/gemini-scheduled-issue-triage.yml @@ -52,38 +52,129 @@ jobs: "run_shell_command(echo)", "run_shell_command(gh label list)", "run_shell_command(gh issue edit)", - "run_shell_command(gh issue list)" + "run_shell_command(gh issue list)", + "run_shell_command(gh issue view)" ], "sandbox": false } prompt: | - You are an issue triage assistant. Analyze issues and apply appropriate labels ONE AT A TIME. - - Repository: ${{ github.repository }} - + You are an issue triage assistant. Analyze the current GitHub issues and apply the most appropriate existing labels. Steps: - 1. Run: `gh label list --repo ${{ github.repository }} --limit 100` to see available labels + 1. Run: `gh label list --repo ${{ github.repository }} --limit 100` to get all available labels. 2. Check environment variable for issues to triage: $ISSUES_TO_TRIAGE (JSON array of issues) - 3. Parse the JSON array from step 2 and for EACH INDIVIDUAL issue, apply appropriate labels using separate commands: - - `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label1"` - - `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label2"` - - Continue for each label separately - - IMPORTANT: Label each issue individually, one command per issue, one label at a time if needed.
- - Guidelines: - - Only use existing repository labels from step 1 - - Do not add comments to issues - - Triage each issue independently based on title and body content - - Focus on applying: kind/* (bug/enhancement/documentation), area/* (core/cli/testing/windows), and priority/* labels - - If an issue has insufficient information, consider applying "status/need-information" - - After applying appropriate labels to an issue, remove the "status/need-triage" label if present: `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "status/need-triage"` - - Execute one `gh issue edit` command per issue, wait for success before proceeding to the next - - Example triage logic: - - Issues with "bug", "error", "broken" → kind/bug - - Issues with "feature", "enhancement", "improve" → kind/enhancement - - Issues about Windows/performance → area/windows, area/performance - - Critical bugs → priority/p0, other bugs → priority/p1, enhancements → priority/p2 - + 3. Review the issue title, body, and any comments provided in the environment variables. + 4. Ignore any existing priorities or tags on the issue. + 5. Select the most relevant labels from the existing labels, focusing on kind/*, area/*, sub-area/*, and priority/*. + 6. Get the list of labels already on the issue using: `gh issue view ISSUE_NUMBER --repo ${{ github.repository }} --json labels -t '{{range .labels}}{{.name}}{{"\n"}}{{end}}'` + 7. For area/* and kind/*, limit yourself to only the single most applicable label in each case. + 8. Report a single short paragraph explaining why you are selecting each label, using the format: Issue ID, Title, Labels applied, Labels removed, overall explanation. + 9. 
Parse the JSON array from step 2 and for EACH INDIVIDUAL issue, apply appropriate labels using separate commands: + - `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label1"` + - `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label2"` + - Continue for each label separately + - IMPORTANT: Label each issue individually, one command per issue, one label at a time if needed. + - Make sure after you apply labels there is only one area/* and one kind/* label per issue. + - To do this, look for labels found in step 6 that no longer apply and remove them one at a time using: + - `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "label-name1"` + - `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "label-name2"` + - IMPORTANT: Remove each label one at a time, one command per issue if needed. + 10. For each issue, check whether a CLI version is present; this is usually in the output of the /about command and will look like 0.1.5. + - If the version is more than 6 versions older than the most recent release, add the status/need-retesting label. + 11. If the issue doesn't appear to have sufficient information, recommend the status/need-information label. + - After applying appropriate labels to an issue, remove the "status/need-triage" label if present: `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "status/need-triage"` + - Execute one `gh issue edit` command per issue and wait for success before proceeding to the next. Process each issue sequentially and confirm each labeling operation before moving to the next issue. + Guidelines: + - Only use labels that already exist in the repository. + - Do not add comments or modify the issue content. + - Do not remove labels titled help wanted or good first issue. + - Triage only the issues provided.
+ - Apply only one area/ label. + - Apply only one kind/ label (do not apply kind/duplicate or kind/parent-issue). + - Apply all applicable sub-area/* and priority/* labels based on the issue content. It's OK to have multiple of these. + - Once you categorize the issue, if it needs information, bump down the priority by 1, e.g., a P0 would become a P1 and a P1 would become a P2. P2 and P3 can stay as is in this scenario. + Categorization Guidelines: + P0: Critical / Blocker + - A P0 bug is a catastrophic failure that demands immediate attention. It represents a complete showstopper for a significant portion of users or for the development process itself. + Impact: + - Blocks development or testing for the entire team. + - Major security vulnerability that could compromise user data or system integrity. + - Causes data loss or corruption with no workaround. + - Crashes the application or makes a core feature completely unusable for all or most users in a production environment. Will it cause severe quality degradation? + - Is it preventing contributors from contributing to the repository, or is it a release blocker? + Qualifier: Is the main function of the software broken? + Example: The gemini auth login command fails with an unrecoverable error, preventing any user from authenticating and using the rest of the CLI. + P1: High + - A P1 bug is a serious issue that significantly degrades the user experience or impacts a core feature. While not a complete blocker, it's a major problem that needs a fast resolution. + - Feature requests are almost never P1. + Impact: + - A core feature is broken or behaving incorrectly for a large number of users or a large number of use cases. + - Review the bug details and comments to try to figure out if this issue affects a large set of use cases or if it's a narrow set of use cases. + - Severe performance degradation making the application frustratingly slow. + - No straightforward workaround exists, or the workaround is difficult and non-obvious.
+ Qualifier: Is a key feature unusable or giving very wrong results? + Example: The gemini -p "..." command consistently returns a malformed JSON response or an empty result, making the CLI's primary generation feature unreliable. + P2: Medium + - A P2 bug is a moderately impactful issue. It's a noticeable problem but doesn't prevent the use of the software's main functionality. + Impact: + - Affects a non-critical feature or a smaller, specific subset of users. + - An inconvenient but functional workaround is available and easy to execute. + - Noticeable UI/UX problems that don't break functionality but look unprofessional (e.g., elements are misaligned or overlapping). + Qualifier: Is it an annoying but non-blocking problem? + Example: An error message is unclear or contains a typo, causing user confusion but not halting their workflow. + P3: Low + - A P3 bug is a minor, low-impact issue that is trivial or cosmetic. It has little to no effect on the overall functionality of the application. + Impact: + - Minor cosmetic issues like color inconsistencies, typos in documentation, or slight alignment problems on a non-critical page. + - An edge-case bug that is very difficult to reproduce and affects a tiny fraction of users. + Qualifier: Is it a "nice-to-fix" issue? + Example: Spelling mistakes, etc. + Additional Context: + - If users are talking about issues where the model gets downgraded from pro to flash, then I want you to categorize that as a performance issue. + - This product is designed to use different models, e.g., using pro and downgrading to flash. + - When users report that they don't expect the model to change, those would be categorized as feature requests. + Definition of Areas + area/ux: + - Issues concerning user-facing elements like command usability, interactive features, help docs, and perceived performance.
+ - I am seeing my screen flicker when using Gemini CLI + - I am seeing the output malformed + - Theme changes aren't taking effect + - My keyboard inputs aren't being recognized + area/platform: + - Issues related to installation, packaging, OS compatibility (Windows, macOS, Linux), and the underlying CLI framework. + area/background: Issues related to long-running background tasks, daemons, and autonomous or proactive agent features. + area/models: + - I am not getting a response that is reasonable or expected. This can include things like: + - I am calling a tool and the tool is not performing as expected. + - I am expecting a tool to be called and it is not getting called. + - Including the experience when using + - built-in tools (e.g., web search, code interpreter, read file, write file, etc.), + - Function calling issues should be under this area. + - I am getting responses from the model that are malformed. + - Issues concerning Gemini quality of response and inference. + - Issues talking about unnecessary token consumption. + - Issues talking about the model getting stuck in a loop; be watchful, as this could be the root cause of issues that otherwise seem like model performance issues. + - Memory compression + - Unexpected responses + - Poor quality of generated code + area/tools: + - These are primarily issues related to Model Context Protocol (MCP). + - These are issues that mention MCP support. + - Feature requests asking for support for new tools. + area/core: + - Issues with fundamental components like command parsing, configuration management, session state, and the main API client logic, as well as introducing multi-modality. + area/contribution: + - Issues related to improving the developer contribution experience, such as CI/CD pipelines, build scripts, and test automation infrastructure.
+ area/authentication: + - Issues related to user identity, login flows, API key handling, credential storage, and access token management: unable to sign in, selecting the wrong authentication path, etc. + area/security-privacy: + - Issues concerning vulnerability patching, dependency security, data sanitization, privacy controls, and preventing unauthorized data access. + area/extensibility: + - Issues related to the plugin system, extension APIs, or making the CLI's functionality available in other applications: GitHub Actions, IDE support, etc. + area/performance: + - Issues focused on model performance. + - Issues with running out of capacity. + - 429 errors, etc. + - Could also pertain to latency. + - Other general software performance concerns like memory usage, CPU consumption, and algorithmic efficiency. + - Switching models from one to the other unexpectedly. diff --git a/.github/workflows/no-response.yml b/.github/workflows/no-response.yml new file mode 100644 index 00000000..3d3d8e7e --- /dev/null +++ b/.github/workflows/no-response.yml @@ -0,0 +1,32 @@ +name: No Response + +# Run as a daily cron at 1:45 AM +on: + schedule: + - cron: '45 1 * * *' + workflow_dispatch: {} + +jobs: + no-response: + runs-on: ubuntu-latest + if: ${{ github.repository == 'google-gemini/gemini-cli' }} + permissions: + issues: write + pull-requests: write + concurrency: + group: ${{ github.workflow }}-no-response + cancel-in-progress: true + steps: + - uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} + days-before-stale: -1 + days-before-close: 14 + stale-issue-label: 'status/need-information' + close-issue-message: > + This issue was marked as needing more information and has not received a response in 14 days. + Closing it for now. If you still face this problem, feel free to reopen with more details. Thank you!
+ stale-pr-label: 'status/need-information' + close-pr-message: > + This pull request was marked as needing more information and has had no updates in 14 days. + Closing it for now. You are welcome to reopen with the required info. Thanks for contributing! diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml new file mode 100644 index 00000000..914e9d57 --- /dev/null +++ b/.github/workflows/stale.yml @@ -0,0 +1,38 @@ +name: Mark stale issues and pull requests + +# Run as a daily cron at 1:30 AM +on: + schedule: + - cron: '30 1 * * *' + workflow_dispatch: {} + +jobs: + stale: + runs-on: ubuntu-latest + if: ${{ github.repository == 'google-gemini/gemini-cli' }} + permissions: + issues: write + pull-requests: write + concurrency: + group: ${{ github.workflow }}-stale + cancel-in-progress: true + steps: + - uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} + stale-issue-message: > + This issue has been automatically marked as stale due to 60 days of inactivity. + It will be closed in 14 days if no further activity occurs. + stale-pr-message: > + This pull request has been automatically marked as stale due to 60 days of inactivity. + It will be closed in 14 days if no further activity occurs. + close-issue-message: > + This issue has been closed due to 14 additional days of inactivity after being marked as stale. + If you believe this is still relevant, feel free to comment or reopen the issue. Thank you! + close-pr-message: > + This pull request has been closed due to 14 additional days of inactivity after being marked as stale. + If this is still relevant, you are welcome to reopen or leave a comment. Thanks for contributing! 
+ days-before-stale: 60 + days-before-close: 14 + exempt-issue-labels: pinned,security + exempt-pr-labels: pinned,security diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 4bba5b5e..6c934f23 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -242,6 +242,8 @@ To hit a breakpoint inside the sandbox container run: DEBUG=1 gemini ``` +**Note:** If you have `DEBUG=true` in a project's `.env` file, it won't affect gemini-cli due to automatic exclusion. Use `.gemini/.env` files for gemini-cli specific debug settings. + ### React DevTools To debug the CLI's React-based UI, you can use React DevTools. Ink, the library used for the CLI's interface, is compatible with React DevTools version 4.x. diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 00000000..226310c2 --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,8 @@ +# Reporting Security Issues + +To report a security issue, please use [https://g.co/vulnz](https://g.co/vulnz). +We use g.co/vulnz for our intake, and do coordination and disclosure here on +GitHub (including using GitHub Security Advisory). The Google Security Team will +respond within 5 working days of your report on g.co/vulnz. + +[GitHub Security Advisory]: https://github.com/google-gemini/gemini-cli/security/advisories diff --git a/docs/cli/authentication.md b/docs/cli/authentication.md index e73dff53..5c4ea597 100644 --- a/docs/cli/authentication.md +++ b/docs/cli/authentication.md @@ -92,6 +92,8 @@ The Gemini CLI requires you to authenticate with Google's AI services. On initia You can create a **`.gemini/.env`** file in your project directory or in your home directory. Creating a plain **`.env`** file also works, but `.gemini/.env` is recommended to keep Gemini variables isolated from other tools. +**Important:** Some environment variables (like `DEBUG` and `DEBUG_MODE`) are automatically excluded from project `.env` files to prevent interference with gemini-cli behavior. Use `.gemini/.env` files for gemini-cli specific variables. 
+ Gemini CLI automatically loads environment variables from the **first** `.env` file it finds, using the following search order: 1. Starting in the **current directory** and moving upward toward `/`, for each directory it checks: diff --git a/docs/cli/commands.md b/docs/cli/commands.md index 7bcb0acf..d2f9854d 100644 --- a/docs/cli/commands.md +++ b/docs/cli/commands.md @@ -17,6 +17,11 @@ Slash commands provide meta-level control over the CLI itself. - **`save`** - **Description:** Saves the current conversation history. You must add a `<tag>` for identifying the conversation state. - **Usage:** `/chat save <tag>` + - **Details on Checkpoint Location:** The default locations for saved chat checkpoints are: + - Linux/macOS: `~/.config/google-generative-ai/checkpoints/` + - Windows: `C:\Users\<username>\AppData\Roaming\google-generative-ai\checkpoints\` + - When you run `/chat list`, the CLI only scans these specific directories to find available checkpoints. + - **Note:** These checkpoints are for manually saving and resuming conversation states. For automatic checkpoints created before file modifications, see the [Checkpointing documentation](../checkpointing.md). - **`resume`** - **Description:** Resumes a conversation from a previous save. - **Usage:** `/chat resume <tag>` @@ -33,6 +38,17 @@ Slash commands provide meta-level control over the CLI itself. - **`/copy`** - **Description:** Copies the last output produced by Qwen Code to your clipboard, for easy sharing or reuse. +- **`/directory`** (or **`/dir`**) + - **Description:** Manage workspace directories for multi-directory support. + - **Sub-commands:** + - **`add`**: + - **Description:** Add a directory to the workspace. The path can be absolute or relative to the current working directory. Paths referenced from the home directory (e.g., `~/projects`) are supported as well. + - **Usage:** `/directory add <path1>,<path2>` + - **Note:** Disabled in restrictive sandbox profiles. If you're using one, use `--include-directories` when starting the session instead.
+ - **`show`**: + - **Description:** Display all directories added by `/directory add` and `--include-directories`. + - **Usage:** `/directory show` + - **`/editor`** - **Description:** Open a dialog for selecting supported editors. @@ -106,6 +122,9 @@ Slash commands provide meta-level control over the CLI itself. - **Persistent setting:** Vim mode preference is saved to `~/.gemini/settings.json` and restored between sessions - **Status indicator:** When enabled, shows `[NORMAL]` or `[INSERT]` in the footer +- **`/init`** + - **Description:** Analyzes the current directory and generates a tailored `GEMINI.md` context file, making it simpler to provide project-specific instructions to the Gemini agent. + ### Custom Commands For a quick start, see the [example](#example-a-pure-function-refactoring-command) below. diff --git a/docs/cli/configuration.md b/docs/cli/configuration.md index 3977bfc7..52824519 100644 --- a/docs/cli/configuration.md +++ b/docs/cli/configuration.md @@ -240,6 +240,14 @@ In addition to a project settings file, a project's `.gemini` directory can cont } ``` +- **`excludedProjectEnvVars`** (array of strings): + - **Description:** Specifies environment variables that should be excluded from being loaded from project `.env` files. This prevents project-specific environment variables (like `DEBUG=true`) from interfering with gemini-cli behavior. Variables from `.gemini/.env` files are never excluded. + - **Default:** `["DEBUG", "DEBUG_MODE"]` + - **Example:** + ```json + "excludedProjectEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"] + ``` + ### Example `settings.json`: ```json @@ -271,7 +279,8 @@ In addition to a project settings file, a project's `.gemini` directory can cont "run_shell_command": { "tokenBudget": 100 } - } + }, + "excludedProjectEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"] } ``` @@ -293,6 +302,8 @@ The CLI automatically loads environment variables from an `.env` file.
The loadi 2. If not found, it searches upwards in parent directories until it finds an `.env` file or reaches the project root (identified by a `.git` folder) or the home directory. 3. If still not found, it looks for `~/.env` (in the user's home directory). +**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and `DEBUG_MODE`) are automatically excluded from being loaded from project `.env` files to prevent interference with gemini-cli behavior. Variables from `.gemini/.env` files are never excluded. You can customize this behavior using the `excludedProjectEnvVars` setting in your `settings.json` file. + - **`GEMINI_API_KEY`** (Required): - Your API key for the Gemini API. - **Crucial for operation.** The CLI will not function without it. @@ -332,6 +343,7 @@ The CLI automatically loads environment variables from an `.env` file. The loadi - `<profile_name>`: Uses a custom profile. To define a custom profile, create a file named `sandbox-macos-<profile_name>.sb` in your project's `.qwen/` directory (e.g., `my-project/.qwen/sandbox-macos-custom.sb`). - **`DEBUG` or `DEBUG_MODE`** (often used by underlying libraries or the CLI itself): - Set to `true` or `1` to enable verbose debug logging, which can be helpful for troubleshooting. + - **Note:** These variables are automatically excluded from project `.env` files by default to prevent interference with gemini-cli behavior. Use `.gemini/.env` files if you need to set these for gemini-cli specifically. - **`NO_COLOR`**: - Set to any value to disable all color output in the CLI. - **`CLI_TITLE`**: @@ -387,6 +399,11 @@ Arguments passed directly when running the CLI can override other configurations - **`--proxy`**: - Sets the proxy for the CLI. - Example: `--proxy http://localhost:7890`. +- **`--include-directories <dirs>`**: + - Includes additional directories in the workspace for multi-directory support. + - Can be specified multiple times or as comma-separated values. + - A maximum of 5 directories can be added.
+ - Example: `--include-directories /path/to/project1,/path/to/project2` or `--include-directories /path/to/project1 --include-directories /path/to/project2` - **`--version`**: - Displays the version of the CLI. - **`--openai-logging`**: @@ -444,6 +461,7 @@ This example demonstrates how you can provide general project context, specific - Location: The CLI also scans for the configured context file in subdirectories _below_ the current working directory (respecting common ignore patterns like `node_modules`, `.git`, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with a `memoryDiscoveryMaxDirs` field in your `settings.json` file. - Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project. - **Concatenation & UI Indication:** The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt to the Gemini model. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context. +- **Importing Content:** You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the [Memory Import Processor documentation](../core/memport.md). - **Commands for Memory Management:** - Use `/memory refresh` to force a re-scan and reload of all context files from all configured locations. This updates the AI's instructional context. - Use `/memory show` to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI. 
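The `excludedProjectEnvVars` behavior documented above can be illustrated with a small sketch. This is not the actual gemini-cli implementation; `filter_env` and the sample paths are hypothetical, but the documented rules are followed: listed variables are dropped when loaded from a project `.env`, and nothing is ever excluded from a `.gemini/.env`.

```python
# Hedged sketch (not the real gemini-cli code) of excludedProjectEnvVars
# filtering: variables on the exclusion list are dropped from project .env
# files, but kept when they come from a trusted .gemini/.env file.

DEFAULT_EXCLUDED = ["DEBUG", "DEBUG_MODE"]  # documented default

def filter_env(parsed: dict, env_path: str, excluded=DEFAULT_EXCLUDED) -> dict:
    # Variables from .gemini/.env files are never excluded.
    if ".gemini/" in env_path.replace("\\", "/"):
        return dict(parsed)
    return {k: v for k, v in parsed.items() if k not in excluded}

project = filter_env({"DEBUG": "true", "GEMINI_API_KEY": "abc"}, "/repo/.env")
trusted = filter_env({"DEBUG": "true"}, "/repo/.gemini/.env")
print(project)  # {'GEMINI_API_KEY': 'abc'}
print(trusted)  # {'DEBUG': 'true'}
```

This is why the docs recommend `.gemini/.env` for gemini-cli-specific variables: a project-wide `DEBUG=true` meant for another tool never leaks into the CLI.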
diff --git a/docs/cli/themes.md b/docs/cli/themes.md index df891868..25d6123c 100644 --- a/docs/cli/themes.md +++ b/docs/cli/themes.md @@ -58,7 +58,11 @@ Add a `customThemes` block to your user, project, or system `settings.json` file "AccentYellow": "#E5C07B", "AccentRed": "#E06C75", "Comment": "#5C6370", - "Gray": "#ABB2BF" + "Gray": "#ABB2BF", + "DiffAdded": "#A6E3A1", + "DiffRemoved": "#F38BA8", + "DiffModified": "#89B4FA", + "GradientColors": ["#4796E4", "#847ACE", "#C3677F"] } } } @@ -77,6 +81,9 @@ Add a `customThemes` block to your user, project, or system `settings.json` file - `AccentRed` - `Comment` - `Gray` +- `DiffAdded` (optional, for added lines in diffs) +- `DiffRemoved` (optional, for removed lines in diffs) +- `DiffModified` (optional, for modified lines in diffs) **Required Properties:** diff --git a/docs/core/memport.md b/docs/core/memport.md index cc6404e0..cc96aad3 100644 --- a/docs/core/memport.md +++ b/docs/core/memport.md @@ -1,18 +1,14 @@ # Memory Import Processor -The Memory Import Processor is a feature that allows you to modularize your GEMINI.md files by importing content from other markdown files using the `@file.md` syntax. +The Memory Import Processor is a feature that allows you to modularize your GEMINI.md files by importing content from other files using the `@file.md` syntax. ## Overview This feature enables you to break down large GEMINI.md files into smaller, more manageable components that can be reused across different contexts. The import processor supports both relative and absolute paths, with built-in safety features to prevent circular imports and ensure file access security. -## Important Limitations - -**This feature only supports `.md` (markdown) files.** Attempting to import files with other extensions (like `.txt`, `.json`, etc.) will result in a warning and the import will fail. 
- ## Syntax -Use the `@` symbol followed by the path to the markdown file you want to import: +Use the `@` symbol followed by the path to the file you want to import: ```markdown # Main GEMINI.md file @@ -96,24 +92,10 @@ The `validateImportPath` function ensures that imports are only allowed from spe ### Maximum Import Depth -To prevent infinite recursion, there's a configurable maximum import depth (default: 10 levels). +To prevent infinite recursion, there's a configurable maximum import depth (default: 5 levels). ## Error Handling -### Non-MD File Attempts - -If you try to import a non-markdown file, you'll see a warning: - -```markdown -@./instructions.txt -``` - -Console output: - -``` -[WARN] [ImportProcessor] Import processor only supports .md files. Attempting to import non-md file: ./instructions.txt. This will fail. -``` - ### Missing Files If a referenced file doesn't exist, the import will fail gracefully with an error comment in the output. @@ -122,6 +104,36 @@ If a referenced file doesn't exist, the import will fail gracefully with an erro Permission issues or other file system errors are handled gracefully with appropriate error messages. +## Code Region Detection + +The import processor uses the `marked` library to detect code blocks and inline code spans, ensuring that `@` imports inside these regions are properly ignored. This provides robust handling of nested code blocks and complex Markdown structures. + +## Import Tree Structure + +The processor returns an import tree that shows the hierarchy of imported files, similar to Claude's `/memory` feature. This helps users debug problems with their GEMINI.md files by showing which files were read and their import relationships. + +Example tree structure: + +``` +Memory Files + L project: GEMINI.md + L a.md + L b.md + L c.md + L d.md + L e.md + L f.md + L included.md +``` + +The tree preserves the order that files were imported and shows the complete import chain for debugging purposes. 
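A tree like the one above could be rendered from the returned structure with a small recursive helper. The `MemoryFile` shape matches the API reference; the rendering code itself is an illustrative sketch, not the CLI's actual implementation:

```typescript
// Sketch: render an import tree in the "L"-prefixed style shown above.
interface MemoryFile {
  path: string; // the file path
  imports?: MemoryFile[]; // direct imports, in import order
}

function renderTree(node: MemoryFile, indent = ' '): string {
  const lines = [`${indent}L ${node.path}`];
  for (const child of node.imports ?? []) {
    // Each level of nesting is indented two further spaces.
    lines.push(renderTree(child, indent + '  '));
  }
  return lines.join('\n');
}

const tree: MemoryFile = {
  path: 'GEMINI.md',
  imports: [{ path: 'a.md', imports: [{ path: 'b.md' }] }],
};
console.log('Memory Files\n' + renderTree(tree));
```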
+ +## Comparison to Claude Code's `/memory` (`claude.md`) Approach + +Claude Code's `/memory` feature (as seen in `claude.md`) produces a flat, linear document by concatenating all included files, always marking file boundaries with clear comments and path names. It does not explicitly present the import hierarchy, but the LLM receives all file contents and paths, which is sufficient for reconstructing the hierarchy if needed. + +Note: The import tree is mainly for clarity during development and has limited relevance to LLM consumption. + ## API Reference ### `processImports(content, basePath, debugMode?, importState?)` Processes import statements in GEMINI.md content. - `debugMode` (boolean, optional): Whether to enable debug logging (default: false) - `importState` (ImportState, optional): State tracking for circular import prevention -**Returns:** Promise<string> - Processed content with imports resolved +**Returns:** Promise<ProcessImportsResult> - Object containing processed content and import tree + +### `ProcessImportsResult` + +```typescript +interface ProcessImportsResult { + content: string; // The processed content with imports resolved + importTree: MemoryFile; // Tree structure showing the import hierarchy +} +``` + +### `MemoryFile` + +```typescript +interface MemoryFile { + path: string; // The file path + imports?: MemoryFile[]; // Direct imports, in the order they were imported +} +``` ### `validateImportPath(importPath, basePath, allowedDirectories)` Validates import paths to ensure they are safe and within allowed directories. **Returns:** boolean - Whether the import path is valid +### `findProjectRoot(startDir)` + +Finds the project root by searching for a `.git` directory upwards from the given start directory. Implemented as an **async** function using non-blocking file system APIs to avoid blocking the Node.js event loop.
+ +**Parameters:** + +- `startDir` (string): The directory to start searching from + +**Returns:** Promise<string> - The project root directory (or the start directory if no `.git` is found) + ## Best Practices 1. **Use descriptive file names** for imported components @@ -161,7 +201,7 @@ Validates import paths to ensure they are safe and within allowed directories. ### Common Issues -1. **Import not working**: Check that the file exists and has a `.md` extension +1. **Import not working**: Check that the file exists and the path is correct 2. **Circular import warnings**: Review your import structure for circular references 3. **Permission errors**: Ensure the files are readable and within allowed directories 4. **Path resolution issues**: Use absolute paths if relative paths aren't resolving correctly diff --git a/docs/extension.md b/docs/extension.md index 0bdede0b..aa5d837a 100644 --- a/docs/extension.md +++ b/docs/extension.md @@ -33,10 +33,44 @@ The `gemini-extension.json` file contains the configuration for the extension. T } ``` -- `name`: The name of the extension. This is used to uniquely identify the extension. This should match the name of your extension directory. +- `name`: The name of the extension. This is used to uniquely identify the extension and for conflict resolution when extension commands have the same name as user or project commands. - `version`: The version of the extension. - `mcpServers`: A map of MCP servers to configure. The key is the name of the server, and the value is the server configuration. These servers will be loaded on startup just like MCP servers configured in a [`settings.json` file](./cli/configuration.md). If both an extension and a `settings.json` file configure an MCP server with the same name, the server defined in the `settings.json` file takes precedence. - `contextFileName`: The name of the file that contains the context for the extension. This will be used to load the context from the workspace.
If this property is not used but a `GEMINI.md` file is present in your extension directory, then that file will be loaded. - `excludeTools`: An array of tool names to exclude from the model. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"excludeTools": ["run_shell_command(rm -rf)"]` will block the `rm -rf` command. When Gemini CLI starts, it loads all the extensions and merges their configurations. If there are any conflicts, the workspace configuration takes precedence. + +## Extension Commands + +Extensions can provide [custom commands](./cli/commands.md#custom-commands) by placing TOML files in a `commands/` subdirectory within the extension directory. These commands follow the same format as user and project custom commands and use standard naming conventions. + +### Example + +An extension named `gcp` with the following structure: + +``` +.gemini/extensions/gcp/ +├── gemini-extension.json +└── commands/ + ├── deploy.toml + └── gcs/ + └── sync.toml +``` + +Would provide these commands: + +- `/deploy` - Shows as `[gcp] Custom command from deploy.toml` in help +- `/gcs:sync` - Shows as `[gcp] Custom command from sync.toml` in help + +### Conflict Resolution + +Extension commands have the lowest precedence. When a conflict occurs with user or project commands: + +1. **No conflict**: Extension command uses its natural name (e.g., `/deploy`) +2. 
**With conflict**: Extension command is renamed with the extension prefix (e.g., `/gcp.deploy`) + +For example, if both a user and the `gcp` extension define a `deploy` command: + +- `/deploy` - Executes the user's deploy command +- `/gcp.deploy` - Executes the extension's deploy command (marked with `[gcp]` tag) diff --git a/docs/gemini-ignore.md b/docs/gemini-ignore.md new file mode 100644 index 00000000..8e8fdf20 --- /dev/null +++ b/docs/gemini-ignore.md @@ -0,0 +1,59 @@ +# Ignoring Files + +This document provides an overview of the Gemini Ignore (`.geminiignore`) feature of the Gemini CLI. + +The Gemini CLI includes the ability to automatically ignore files, similar to `.gitignore` (used by Git) and `.aiexclude` (used by Gemini Code Assist). Adding paths to your `.geminiignore` file will exclude them from tools that support this feature, although they will still be visible to other services (such as Git). + +## How it works + +When you add a path to your `.geminiignore` file, tools that respect this file will exclude matching files and directories from their operations. For example, when you use the [`read_many_files`](./tools/multi-file.md) command, any paths in your `.geminiignore` file will be automatically excluded. + +For the most part, `.geminiignore` follows the conventions of `.gitignore` files: + +- Blank lines and lines starting with `#` are ignored. +- Standard glob patterns are supported (such as `*`, `?`, and `[]`). +- Putting a `/` at the end will only match directories. +- Putting a `/` at the beginning anchors the path relative to the `.geminiignore` file. +- `!` negates a pattern. + +You can update your `.geminiignore` file at any time. To apply the changes, you must restart your Gemini CLI session. + +## How to use `.geminiignore` + +To enable `.geminiignore`: + +1. Create a file named `.geminiignore` in the root of your project directory. + +To add a file or directory to `.geminiignore`: + +1. Open your `.geminiignore` file. +2. 
Add the path or file you want to ignore, for example: `/archive/` or `apikeys.txt`. + +### `.geminiignore` examples + +You can use `.geminiignore` to ignore directories and files: + +``` +# Exclude your /packages/ directory and all subdirectories +/packages/ + +# Exclude your apikeys.txt file +apikeys.txt +``` + +You can use wildcards in your `.geminiignore` file with `*`: + +``` +# Exclude all .md files +*.md +``` + +Finally, you can exclude files and directories from exclusion with `!`: + +``` +# Exclude all .md files except README.md +*.md +!README.md +``` + +To remove paths from your `.geminiignore` file, delete the relevant lines. diff --git a/docs/issue-and-pr-automation.md b/docs/issue-and-pr-automation.md new file mode 100644 index 00000000..45a4bdfd --- /dev/null +++ b/docs/issue-and-pr-automation.md @@ -0,0 +1,84 @@ +# Automation and Triage Processes + +This document provides a detailed overview of the automated processes we use to manage and triage issues and pull requests. Our goal is to provide prompt feedback and ensure that contributions are reviewed and integrated efficiently. Understanding this automation will help you as a contributor know what to expect and how to best interact with our repository bots. + +## Guiding Principle: Issues and Pull Requests + +First and foremost, almost every Pull Request (PR) should be linked to a corresponding Issue. The issue describes the "what" and the "why" (the bug or feature), while the PR is the "how" (the implementation). This separation helps us track work, prioritize features, and maintain clear historical context. Our automation is built around this principle. + +--- + +## Detailed Automation Workflows + +Here is a breakdown of the specific automation workflows that run in our repository. + +### 1. When you open an Issue: `Automated Issue Triage` + +This is the first bot you will interact with when you create an issue. Its job is to perform an initial analysis and apply the correct labels. 
+ +- **Workflow File**: `.github/workflows/gemini-automated-issue-triage.yml` +- **When it runs**: Immediately after an issue is created or reopened. +- **What it does**: + - It uses a Gemini model to analyze the issue's title and body against a detailed set of guidelines. + - **Applies one `area/*` label**: Categorizes the issue into a functional area of the project (e.g., `area/ux`, `area/models`, `area/platform`). + - **Applies one `kind/*` label**: Identifies the type of issue (e.g., `kind/bug`, `kind/enhancement`, `kind/question`). + - **Applies one `priority/*` label**: Assigns a priority from P0 (critical) to P3 (low) based on the described impact. + - **May apply `status/need-information`**: If the issue lacks critical details (like logs or reproduction steps), it will be flagged for more information. + - **May apply `status/need-retesting`**: If the issue references a CLI version that is more than six versions old, it will be flagged for retesting on a current version. +- **What you should do**: + - Fill out the issue template as completely as possible. The more detail you provide, the more accurate the triage will be. + - If the `status/need-information` label is added, please provide the requested details in a comment. + +### 2. When you open a Pull Request: `Continuous Integration (CI)` + +This workflow ensures that all changes meet our quality standards before they can be merged. + +- **Workflow File**: `.github/workflows/ci.yml` +- **When it runs**: On every push to a pull request. +- **What it does**: + - **Lint**: Checks that your code adheres to our project's formatting and style rules. + - **Test**: Runs our full suite of automated tests across macOS, Windows, and Linux, and on multiple Node.js versions. This is the most time-consuming part of the CI process. + - **Post Coverage Comment**: After all tests have successfully passed, a bot will post a comment on your PR. This comment provides a summary of how well your changes are covered by tests. 
+- **What you should do**: + - Ensure all CI checks pass. A green checkmark ✅ will appear next to your commit when everything is successful. + - If a check fails (a red "X" ❌), click the "Details" link next to the failed check to view the logs, identify the problem, and push a fix. + +### 3. Ongoing Triage for Pull Requests: `PR Auditing and Label Sync` + +This workflow runs periodically to ensure all open PRs are correctly linked to issues and have consistent labels. + +- **Workflow File**: `.github/workflows/gemini-scheduled-pr-triage.yml` +- **When it runs**: Every 15 minutes on all open pull requests. +- **What it does**: + - **Checks for a linked issue**: The bot scans your PR description for a keyword that links it to an issue (e.g., `Fixes #123`, `Closes #456`). + - **Adds `status/need-issue`**: If no linked issue is found, the bot will add the `status/need-issue` label to your PR. This is a clear signal that an issue needs to be created and linked. + - **Synchronizes labels**: If an issue _is_ linked, the bot ensures the PR's labels perfectly match the issue's labels. It will add any missing labels and remove any that don't belong, and it will remove the `status/need-issue` label if it was present. +- **What you should do**: + - **Always link your PR to an issue.** This is the most important step. Add a line like `Resolves #` to your PR description. + - This will ensure your PR is correctly categorized and moves through the review process smoothly. + +### 4. Ongoing Triage for Issues: `Scheduled Issue Triage` + +This is a fallback workflow to ensure that no issue gets missed by the triage process. + +- **Workflow File**: `.github/workflows/gemini-scheduled-issue-triage.yml` +- **When it runs**: Every hour on all open issues. +- **What it does**: + - It actively seeks out issues that either have no labels at all or still have the `status/need-triage` label. 
+ - It then triggers the same powerful Gemini-based analysis as the initial triage bot to apply the correct labels. +- **What you should do**: + - You typically don't need to do anything. This workflow is a safety net to ensure every issue is eventually categorized, even if the initial triage fails. + +### 5. Release Automation + +This workflow handles the process of packaging and publishing new versions of the Gemini CLI. + +- **Workflow File**: `.github/workflows/release.yml` +- **When it runs**: On a daily schedule for "nightly" releases, and manually for official patch/minor releases. +- **What it does**: + - Automatically builds the project, bumps the version numbers, and publishes the packages to npm. + - Creates a corresponding release on GitHub with generated release notes. +- **What you should do**: + - As a contributor, you don't need to do anything for this process. You can be confident that once your PR is merged into the `main` branch, your changes will be included in the very next nightly release. + +We hope this detailed overview is helpful. If you have any questions about our automation or processes, please don't hesitate to ask! diff --git a/docs/keyboard-shortcuts.md b/docs/keyboard-shortcuts.md new file mode 100644 index 00000000..37e47045 --- /dev/null +++ b/docs/keyboard-shortcuts.md @@ -0,0 +1,62 @@ +# Gemini CLI Keyboard Shortcuts + +This document lists the available keyboard shortcuts in the Gemini CLI. + +## General + +| Shortcut | Description | +| -------- | --------------------------------------------------------------------------------------------------------------------- | +| `Esc` | Close dialogs and suggestions. | +| `Ctrl+C` | Exit the application. Press twice to confirm. | +| `Ctrl+D` | Exit the application if the input is empty. Press twice to confirm. | +| `Ctrl+L` | Clear the screen. | +| `Ctrl+O` | Toggle the display of the debug console. | +| `Ctrl+S` | Allows long responses to print fully, disabling truncation. 
Use your terminal's scrollback to view the entire output. | +| `Ctrl+T` | Toggle the display of tool descriptions. | +| `Ctrl+Y` | Toggle auto-approval (YOLO mode) for all tool calls. | + +## Input Prompt + +| Shortcut | Description | +| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | +| `!` | Toggle shell mode when the input is empty. | +| `\` (at end of line) + `Enter` | Insert a newline. | +| `Down Arrow` | Navigate down through the input history. | +| `Enter` | Submit the current prompt. | +| `Meta+Delete` / `Ctrl+Delete` | Delete the word to the right of the cursor. | +| `Tab` | Autocomplete the current suggestion if one exists. | +| `Up Arrow` | Navigate up through the input history. | +| `Ctrl+A` / `Home` | Move the cursor to the beginning of the line. | +| `Ctrl+B` / `Left Arrow` | Move the cursor one character to the left. | +| `Ctrl+C` | Clear the input prompt | +| `Ctrl+D` / `Delete` | Delete the character to the right of the cursor. | +| `Ctrl+E` / `End` | Move the cursor to the end of the line. | +| `Ctrl+F` / `Right Arrow` | Move the cursor one character to the right. | +| `Ctrl+H` / `Backspace` | Delete the character to the left of the cursor. | +| `Ctrl+K` | Delete from the cursor to the end of the line. | +| `Ctrl+Left Arrow` / `Meta+Left Arrow` / `Meta+B` | Move the cursor one word to the left. | +| `Ctrl+N` | Navigate down through the input history. | +| `Ctrl+P` | Navigate up through the input history. | +| `Ctrl+Right Arrow` / `Meta+Right Arrow` / `Meta+F` | Move the cursor one word to the right. | +| `Ctrl+U` | Delete from the cursor to the beginning of the line. | +| `Ctrl+V` | Paste clipboard content. If the clipboard contains an image, it will be saved and a reference to it will be inserted in the prompt. | +| `Ctrl+W` / `Meta+Backspace` / `Ctrl+Backspace` | Delete the word to the left of the cursor. 
| +| `Ctrl+X` / `Meta+Enter` | Open the current input in an external editor. | + +## Suggestions + +| Shortcut | Description | +| --------------- | -------------------------------------- | +| `Down Arrow` | Navigate down through the suggestions. | +| `Tab` / `Enter` | Accept the selected suggestion. | +| `Up Arrow` | Navigate up through the suggestions. | + +## Radio Button Select + +| Shortcut | Description | +| ------------------ | ------------------------------------------------------------------------------------------------------------- | +| `Down Arrow` / `j` | Move selection down. | +| `Enter` | Confirm selection. | +| `Up Arrow` / `k` | Move selection up. | +| `1-9` | Select an item by its number. | +| (multi-digit) | For items with numbers greater than 9, press the digits in quick succession to select the corresponding item. | diff --git a/docs/sandbox.md b/docs/sandbox.md index 87763685..20a1a3b5 100644 --- a/docs/sandbox.md +++ b/docs/sandbox.md @@ -77,6 +77,24 @@ Built-in profiles (set via `SEATBELT_PROFILE` env var): - `restrictive-open`: Strict restrictions, network allowed - `restrictive-closed`: Maximum restrictions +### Custom Sandbox Flags + +For container-based sandboxing, you can inject custom flags into the `docker` or `podman` command using the `SANDBOX_FLAGS` environment variable. This is useful for advanced configurations, such as disabling security features for specific use cases. + +**Example (Podman)**: + +To disable SELinux labeling for volume mounts, you can set the following: + +```bash +export SANDBOX_FLAGS="--security-opt label=disable" +``` + +Multiple flags can be provided as a space-separated string: + +```bash +export SANDBOX_FLAGS="--flag1 --flag2=value" +``` + ## Linux UID/GID handling The sandbox automatically handles user permissions on Linux. 
Override these permissions with: @@ -111,6 +129,8 @@ export SANDBOX_SET_UID_GID=false # Disable UID/GID mapping DEBUG=1 gemini -s -p "debug command" ``` +**Note:** If you have `DEBUG=true` in a project's `.env` file, it won't affect gemini-cli due to automatic exclusion. Use `.gemini/.env` files for gemini-cli specific debug settings. + ### Inspect sandbox ```bash diff --git a/docs/telemetry.md b/docs/telemetry.md index 2209ee0b..c7b88ba9 100644 --- a/docs/telemetry.md +++ b/docs/telemetry.md @@ -209,6 +209,11 @@ Logs are timestamped records of specific events. The following events are logged - **Attributes**: - `auth_type` +- `gemini_cli.slash_command`: This event occurs when a user executes a slash command. + - **Attributes**: + - `command` (string) + - `subcommand` (string, if applicable) + ### Metrics Metrics are numerical measurements of behavior over time. The following metrics are collected for Gemini CLI: diff --git a/docs/tools/mcp-server.md b/docs/tools/mcp-server.md index cd70da04..050e10e8 100644 --- a/docs/tools/mcp-server.md +++ b/docs/tools/mcp-server.md @@ -570,3 +570,70 @@ The MCP integration tracks several states: - **Conflict resolution:** Tool name conflicts between servers are resolved through automatic prefixing This comprehensive integration makes MCP servers a powerful way to extend the Gemini CLI's capabilities while maintaining security, reliability, and ease of use. + +## MCP Prompts as Slash Commands + +In addition to tools, MCP servers can expose predefined prompts that can be executed as slash commands within the Gemini CLI. This allows you to create shortcuts for common or complex queries that can be easily invoked by name. 
+ +### Defining Prompts on the Server + +Here's a small example of a stdio MCP server that defines prompts: + +```ts +import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'; +import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'; +import { z } from 'zod'; + +const server = new McpServer({ + name: 'prompt-server', + version: '1.0.0', +}); + +server.registerPrompt( + 'poem-writer', + { + title: 'Poem Writer', + description: 'Write a nice haiku', + argsSchema: { title: z.string(), mood: z.string().optional() }, + }, + ({ title, mood }) => ({ + messages: [ + { + role: 'user', + content: { + type: 'text', + text: `Write a haiku${mood ? ` with the mood ${mood}` : ''} called ${title}. Note that a haiku is 5 syllables followed by 7 syllables followed by 5 syllables `, + }, + }, + ], + }), +); + +const transport = new StdioServerTransport(); +await server.connect(transport); +``` + +This can be included in `settings.json` under `mcpServers` with: + +```json +"nodeServer": { + "command": "node", + "args": ["filename.ts"] +} +``` + +### Invoking Prompts + +Once a prompt is discovered, you can invoke it using its name as a slash command. The CLI will automatically handle parsing arguments. + +```bash +/poem-writer --title="Gemini CLI" --mood="reverent" +``` + +or, using positional arguments: + +```bash +/poem-writer "Gemini CLI" reverent +``` + +When you run this command, the Gemini CLI executes the `prompts/get` method on the MCP server with the provided arguments. The server is responsible for substituting the arguments into the prompt template and returning the final prompt text. The CLI then sends this prompt to the model for execution. This provides a convenient way to automate and share common workflows.
diff --git a/docs/tools/multi-file.md b/docs/tools/multi-file.md index 0cd1e19e..1bc495f6 100644 --- a/docs/tools/multi-file.md +++ b/docs/tools/multi-file.md @@ -11,11 +11,13 @@ Use `read_many_files` to read content from multiple files specified by paths or `read_many_files` can be used to perform tasks such as getting an overview of a codebase, finding where specific functionality is implemented, reviewing documentation, or gathering context from multiple configuration files. +**Note:** `read_many_files` looks for files following the provided paths or glob patterns. A directory path such as `"/docs"` will return an empty result; the tool requires a pattern such as `"/docs/*"` or `"/docs/*.md"` to identify the relevant files. + ### Arguments `read_many_files` takes the following arguments: -- `paths` (list[string], required): An array of glob patterns or paths relative to the tool's target directory (e.g., `["src/**/*.ts"]`, `["README.md", "docs/", "assets/logo.png"]`). +- `paths` (list[string], required): An array of glob patterns or paths relative to the tool's target directory (e.g., `["src/**/*.ts"]`, `["README.md", "docs/*", "assets/logo.png"]`). - `exclude` (list[string], optional): Glob patterns for files/directories to exclude (e.g., `["**/*.log", "temp/"]`). These are added to default excludes if `useDefaultExcludes` is true. - `include` (list[string], optional): Additional glob patterns to include. These are merged with `paths` (e.g., `["*.test.ts"]` to specifically add test files if they were broadly excluded, or `["images/*.jpg"]` to include specific image types). - `recursive` (boolean, optional): Whether to search recursively. This is primarily controlled by `**` in glob patterns. Defaults to `true`. 
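For instance, a request that gathers all Markdown docs while skipping a changelog might look like this (the argument values here are illustrative, but the parameter names are the documented ones):

```json
{
  "paths": ["docs/*.md", "README.md"],
  "exclude": ["**/CHANGELOG.md"],
  "useDefaultExcludes": true
}
```

Note that `"docs/*.md"` is a glob pattern, not a bare directory path, per the note above.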
diff --git a/docs/tos-privacy.md b/docs/tos-privacy.md index b2cbbc29..f7c8afe5 100644 --- a/docs/tos-privacy.md +++ b/docs/tos-privacy.md @@ -63,6 +63,8 @@ You may opt-out from sending Usage Statistics to Google by following the instruc Whether your code, including prompts and answers, is used to train Google's models depends on the type of authentication method you use and your account type. +By default (if you have not opted out): + - **Google account with Gemini Code Assist for Individuals**: Yes. When you use your personal Google account, the [Gemini Code Assist Privacy Notice for Individuals](https://developers.google.com/gemini-code-assist/resources/privacy-notice-gemini-code-assist-individuals) applies. Under this notice, your **prompts, answers, and related code are collected** and may be used to improve Google's products, including for model training. - **Google account with Gemini Code Assist for Workspace, Standard, or Enterprise**: No. For these accounts, your data is governed by the [Gemini Code Assist Privacy Notices](https://cloud.google.com/gemini/docs/codeassist/security-privacy-compliance#standard_and_enterprise_data_protection_and_privacy) terms, which treat your inputs as confidential. Your **prompts, answers, and related code are not collected** and are not used to train models. @@ -71,17 +73,21 @@ Whether your code, including prompts and answers, is used to train Google's mode - **Paid services**: No. When you use the Gemini API key via the Gemini Developer API with a paid service, the [Gemini API Terms of Service - Paid Services](https://ai.google.dev/gemini-api/terms#paid-services) terms apply, which treat your inputs as confidential. Your **prompts, answers, and related code are not collected** and are not used to train models. - **Gemini API key via the Vertex AI GenAI API**: No.
For these accounts, your data is governed by the [Google Cloud Privacy Notice](https://cloud.google.com/terms/cloud-privacy-notice) terms, which treat your inputs as confidential. Your **prompts, answers, and related code are not collected** and are not used to train models. +For more information about opting out, refer to the next question. + ### 2. What are Usage Statistics and what does the opt-out control? The **Usage Statistics** setting is the single control for all optional data collection in the Gemini CLI. The data it collects depends on your account and authentication type: -- **Google account with Gemini Code Assist for Individuals**: When enabled, this setting allows Google to collect both anonymous telemetry (for example, commands run and performance metrics) and **your prompts and answers** for model improvement. -- **Google account with Gemini Code Assist for Workspace, Standard, or Enterprise**: This setting only controls the collection of anonymous telemetry. Your prompts and answers are never collected, regardless of this setting. +- **Google account with Gemini Code Assist for Individuals**: When enabled, this setting allows Google to collect both anonymous telemetry (for example, commands run and performance metrics) and **your prompts and answers, including code,** for model improvement. +- **Google account with Gemini Code Assist for Workspace, Standard, or Enterprise**: This setting only controls the collection of anonymous telemetry. Your prompts and answers, including code, are never collected, regardless of this setting. - **Gemini API key via the Gemini Developer API**: - **Unpaid services**: When enabled, this setting allows Google to collect both anonymous telemetry (like commands run and performance metrics) and **your prompts and answers** for model improvement. When disabled we will use your data as described in [How Google Uses Your Data](https://ai.google.dev/gemini-api/terms#data-use-unpaid). 
+ **Unpaid services**: When enabled, this setting allows Google to collect both anonymous telemetry (like commands run and performance metrics) and **your prompts and answers, including code,** for model improvement. When disabled we will use your data as described in [How Google Uses Your Data](https://ai.google.dev/gemini-api/terms#data-use-unpaid). **Paid services**: This setting only controls the collection of anonymous telemetry. Google logs prompts and responses for a limited period of time, solely for the purpose of detecting violations of the Prohibited Use Policy and any required legal or regulatory disclosures. -- **Gemini API key via the Vertex AI GenAI API:** This setting only controls the collection of anonymous telemetry. Your prompts and answers are never collected, regardless of this setting. +- **Gemini API key via the Vertex AI GenAI API:** This setting only controls the collection of anonymous telemetry. Your prompts and answers, including code, are never collected, regardless of this setting. + +Please refer to the Privacy Notice that applies to your authentication method for more information about what data is collected and how this data is used. You can disable Usage Statistics for any account type by following the instructions in the [Usage Statistics Configuration](./cli/configuration.md#usage-statistics) documentation. diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index fa88e26e..8c500445 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -53,6 +53,11 @@ This guide provides solutions to common issues and debugging tips. - **Cause:** The `is-in-ci` package checks for the presence of `CI`, `CONTINUOUS_INTEGRATION`, or any environment variable with a `CI_` prefix. When any of these are found, it signals that the environment is non-interactive, which prevents the CLI from starting in its interactive mode. 
  - **Solution:** If the `CI_` prefixed variable is not needed for the CLI to function, you can temporarily unset it for the command. e.g., `env -u CI_TOKEN gemini`
+- **DEBUG mode not working from project .env file**
+  - **Issue:** Setting `DEBUG=true` in a project's `.env` file doesn't enable debug mode for gemini-cli.
+  - **Cause:** The `DEBUG` and `DEBUG_MODE` variables are automatically excluded from project `.env` files to prevent interference with gemini-cli behavior.
+  - **Solution:** Use a `.gemini/.env` file instead, or configure the `excludedProjectEnvVars` setting in your `settings.json` to exclude fewer variables.
+
 ## Debugging Tips

 - **CLI debugging:**
diff --git a/eslint.config.js b/eslint.config.js
index 169bbd17..a1194df7 100644
--- a/eslint.config.js
+++ b/eslint.config.js
@@ -34,6 +34,7 @@ export default tseslint.config(
       'packages/server/dist/**',
       'packages/vscode-ide-companion/dist/**',
       'bundle/**',
+      'package/bundle/**',
     ],
   },
   eslint.configs.recommended,
@@ -203,6 +204,21 @@ export default tseslint.config(
       '@typescript-eslint/no-require-imports': 'off',
     },
   },
+  // extra settings for scripts that we run directly with node
+  {
+    files: ['packages/vscode-ide-companion/scripts/**/*.js'],
+    languageOptions: {
+      globals: {
+        ...globals.node,
+        process: 'readonly',
+        console: 'readonly',
+      },
+    },
+    rules: {
+      'no-restricted-syntax': 'off',
+      '@typescript-eslint/no-require-imports': 'off',
+    },
+  },
   // Prettier config must be last
   prettierConfig,
   // extra settings for scripts that we run directly with node
diff --git a/integration-tests/file-system.test.js b/integration-tests/file-system.test.js
index 87e9efe2..d43f047f 100644
--- a/integration-tests/file-system.test.js
+++ b/integration-tests/file-system.test.js
@@ -6,25 +6,84 @@
 import { strict as assert } from 'assert';
 import { test } from 'node:test';
-import { TestRig } from './test-helper.js';
+import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';

-test('reads a file', (t) => {
+test('should be able to read a file', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
+  await rig.setup('should be able to read a file');
   rig.createFile('test.txt', 'hello world');

-  const output = rig.run(`read the file name test.txt`);
+  const result = await rig.run(
+    `read the file test.txt and show me its contents`,
+  );

-  assert.ok(output.toLowerCase().includes('hello'));
+  const foundToolCall = await rig.waitForToolCall('read_file');
+
+  // Add debugging information
+  if (!foundToolCall || !result.includes('hello world')) {
+    printDebugInfo(rig, result, {
+      'Found tool call': foundToolCall,
+      'Contains hello world': result.includes('hello world'),
+    });
+  }
+
+  assert.ok(foundToolCall, 'Expected to find a read_file tool call');
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  validateModelOutput(result, 'hello world', 'File read test');
 });

-test('writes a file', (t) => {
+test('should be able to write a file', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
+  await rig.setup('should be able to write a file');
   rig.createFile('test.txt', '');
-  rig.run(`edit test.txt to have a hello world message`);
+  const result = await rig.run(`edit test.txt to have a hello world message`);
+
+  // Accept multiple valid tools for editing files
+  const foundToolCall = await rig.waitForAnyToolCall([
+    'write_file',
+    'edit',
+    'replace',
+  ]);
+
+  // Add debugging information
+  if (!foundToolCall) {
+    printDebugInfo(rig, result);
+  }
+
+  assert.ok(
+    foundToolCall,
+    'Expected to find a write_file, edit, or replace tool call',
+  );
+
+  // Validate model output - will throw if no output
+  validateModelOutput(result, null, 'File write test');

   const fileContent = rig.readFile('test.txt');
-  assert.ok(fileContent.toLowerCase().includes('hello'));
+
+  // Add debugging for file content
+  if (!fileContent.toLowerCase().includes('hello')) {
+    const writeCalls = rig
+      .readToolLogs()
+      .filter((t) => t.toolRequest.name === 'write_file')
+      .map((t) => t.toolRequest.args);
+
+    printDebugInfo(rig, result, {
+      'File content mismatch': true,
+      'Expected to contain': 'hello',
+      'Actual content': fileContent,
+      'Write tool calls': JSON.stringify(writeCalls),
+    });
+  }
+
+  assert.ok(
+    fileContent.toLowerCase().includes('hello'),
+    'Expected file to contain hello',
+  );
+
+  // Log success info if verbose
+  if (process.env.VERBOSE === 'true') {
+    console.log('File written successfully with hello message.');
+  }
 });
diff --git a/integration-tests/google_web_search.test.js b/integration-tests/google_web_search.test.js
index a8968117..31747421 100644
--- a/integration-tests/google_web_search.test.js
+++ b/integration-tests/google_web_search.test.js
@@ -6,14 +6,69 @@
 import { test } from 'node:test';
 import { strict as assert } from 'assert';
-import { TestRig } from './test-helper.js';
+import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';

-test('should be able to search the web', async (t) => {
+test('should be able to search the web', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
+  await rig.setup('should be able to search the web');

-  const prompt = `what planet do we live on`;
-  const result = await rig.run(prompt);
+  let result;
+  try {
+    result = await rig.run(`what is the weather in London`);
+  } catch (error) {
+    // Network errors can occur in CI environments
+    if (
+      error.message.includes('network') ||
+      error.message.includes('timeout')
+    ) {
+      console.warn('Skipping test due to network error:', error.message);
+      return; // Skip the test
+    }
+    throw error; // Re-throw if not a network error
+  }

-  assert.ok(result.toLowerCase().includes('earth'));
+  const foundToolCall = await rig.waitForToolCall('google_web_search');
+
+  // Add debugging information
+  if (!foundToolCall) {
+    const allTools = printDebugInfo(rig, result);
+
+    // Check if the tool call failed due to network issues
+    const failedSearchCalls = allTools.filter(
+      (t) =>
+        t.toolRequest.name === 'google_web_search' && !t.toolRequest.success,
+    );
+    if (failedSearchCalls.length > 0) {
+      console.warn(
+        'google_web_search tool was called but failed, possibly due to network issues',
+      );
+      console.warn(
+        'Failed calls:',
+        failedSearchCalls.map((t) => t.toolRequest.args),
+      );
+      return; // Skip the test if network issues
+    }
+  }
+
+  assert.ok(foundToolCall, 'Expected to find a call to google_web_search');
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  const hasExpectedContent = validateModelOutput(
+    result,
+    ['weather', 'london'],
+    'Google web search test',
+  );
+
+  // If content was missing, log the search queries used
+  if (!hasExpectedContent) {
+    const searchCalls = rig
+      .readToolLogs()
+      .filter((t) => t.toolRequest.name === 'google_web_search');
+    if (searchCalls.length > 0) {
+      console.warn(
+        'Search queries used:',
+        searchCalls.map((t) => t.toolRequest.args),
+      );
+    }
+  }
 });
diff --git a/integration-tests/list_directory.test.js b/integration-tests/list_directory.test.js
index af7aae78..16f49f4b 100644
--- a/integration-tests/list_directory.test.js
+++ b/integration-tests/list_directory.test.js
@@ -6,19 +6,57 @@
 import { test } from 'node:test';
 import { strict as assert } from 'assert';
-import { TestRig } from './test-helper.js';
+import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
+import { existsSync } from 'fs';
+import { join } from 'path';

-test('should be able to list a directory', async (t) => {
+test('should be able to list a directory', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
+  await rig.setup('should be able to list a directory');
   rig.createFile('file1.txt', 'file 1 content');
   rig.mkdir('subdir');
   rig.sync();

-  const prompt = `Can you list the files in the current directory. Display them in the style of 'ls'`;
-  const result = rig.run(prompt);
+  // Poll for filesystem changes to propagate in containers
+  await rig.poll(
+    () => {
+      // Check if the files exist in the test directory
+      const file1Path = join(rig.testDir, 'file1.txt');
+      const subdirPath = join(rig.testDir, 'subdir');
+      return existsSync(file1Path) && existsSync(subdirPath);
+    },
+    1000, // 1 second max wait
+    50, // check every 50ms
+  );

-  const lines = result.split('\n').filter((line) => line.trim() !== '');
-  assert.ok(lines.some((line) => line.includes('file1.txt')));
-  assert.ok(lines.some((line) => line.includes('subdir')));
+  const prompt = `Can you list the files in the current directory. Display them in the style of 'ls'`;
+
+  const result = await rig.run(prompt);
+
+  const foundToolCall = await rig.waitForToolCall('list_directory');
+
+  // Add debugging information
+  if (
+    !foundToolCall ||
+    !result.includes('file1.txt') ||
+    !result.includes('subdir')
+  ) {
+    const allTools = printDebugInfo(rig, result, {
+      'Found tool call': foundToolCall,
+      'Contains file1.txt': result.includes('file1.txt'),
+      'Contains subdir': result.includes('subdir'),
+    });
+
+    console.error(
+      'List directory calls:',
+      allTools
+        .filter((t) => t.toolRequest.name === 'list_directory')
+        .map((t) => t.toolRequest.args),
+    );
+  }
+
+  assert.ok(foundToolCall, 'Expected to find a list_directory tool call');
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  validateModelOutput(result, ['file1.txt', 'subdir'], 'List directory test');
 });
diff --git a/integration-tests/read_many_files.test.js b/integration-tests/read_many_files.test.js
index 7e770036..74d2f358 100644
--- a/integration-tests/read_many_files.test.js
+++ b/integration-tests/read_many_files.test.js
@@ -6,17 +6,45 @@
 import { test } from 'node:test';
 import { strict as assert } from 'assert';
-import { TestRig } from './test-helper.js';
+import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';

-test.skip('should be able to read multiple files', async (t) => {
+test('should be able to read multiple files', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
+  await rig.setup('should be able to read multiple files');
   rig.createFile('file1.txt', 'file 1 content');
   rig.createFile('file2.txt', 'file 2 content');

-  const prompt = `Read the files in this directory, list them and print them to the screen`;
+  const prompt = `Please use read_many_files to read file1.txt and file2.txt and show me what's in them`;
+
   const result = await rig.run(prompt);

-  assert.ok(result.includes('file 1 content'));
-  assert.ok(result.includes('file 2 content'));
+  // Check for either read_many_files or multiple read_file calls
+  const allTools = rig.readToolLogs();
+  const readManyFilesCall = await rig.waitForToolCall('read_many_files');
+  const readFileCalls = allTools.filter(
+    (t) => t.toolRequest.name === 'read_file',
+  );
+
+  // Accept either read_many_files OR at least 2 read_file calls
+  const foundValidPattern = readManyFilesCall || readFileCalls.length >= 2;
+
+  // Add debugging information
+  if (!foundValidPattern) {
+    printDebugInfo(rig, result, {
+      'read_many_files called': readManyFilesCall,
+      'read_file calls': readFileCalls.length,
+    });
+  }
+
+  assert.ok(
+    foundValidPattern,
+    'Expected to find either read_many_files or multiple read_file tool calls',
+  );
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  validateModelOutput(
+    result,
+    ['file 1 content', 'file 2 content'],
+    'Read many files test',
+  );
 });
diff --git a/integration-tests/replace.test.js b/integration-tests/replace.test.js
index 060aba55..1ac6f5a4 100644
--- a/integration-tests/replace.test.js
+++ b/integration-tests/replace.test.js
@@ -6,17 +6,61 @@
 import { test } from 'node:test';
 import { strict as assert } from 'assert';
-import { TestRig } from './test-helper.js';
+import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';

-test('should be able to replace content in a file', async (t) => {
+test('should be able to replace content in a file', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
+  await rig.setup('should be able to replace content in a file');
   const fileName = 'file_to_replace.txt';
-  rig.createFile(fileName, 'original content');
+  const originalContent = 'original content';
+  const expectedContent = 'replaced content';
+
+  rig.createFile(fileName, originalContent);

   const prompt = `Can you replace 'original' with 'replaced' in the file 'file_to_replace.txt'`;

-  await rig.run(prompt);
+  const result = await rig.run(prompt);
+
+  const foundToolCall = await rig.waitForToolCall('replace');
+
+  // Add debugging information
+  if (!foundToolCall) {
+    printDebugInfo(rig, result);
+  }
+
+  assert.ok(foundToolCall, 'Expected to find a replace tool call');
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  validateModelOutput(
+    result,
+    ['replaced', 'file_to_replace.txt'],
+    'Replace content test',
+  );

   const newFileContent = rig.readFile(fileName);
-  assert.strictEqual(newFileContent, 'replaced content');
+
+  // Add debugging for file content
+  if (newFileContent !== expectedContent) {
+    console.error('File content mismatch - Debug info:');
+    console.error('Expected:', expectedContent);
+    console.error('Actual:', newFileContent);
+    console.error(
+      'Tool calls:',
+      rig.readToolLogs().map((t) => ({
+        name: t.toolRequest.name,
+        args: t.toolRequest.args,
+      })),
+    );
+  }
+
+  assert.strictEqual(
+    newFileContent,
+    expectedContent,
+    'File content should be updated correctly',
+  );
+
+  // Log success info if verbose
+  if (process.env.VERBOSE === 'true') {
+    console.log('File replaced successfully. New content:', newFileContent);
+  }
 });
diff --git a/integration-tests/run-tests.js b/integration-tests/run-tests.js
index 4b4a9a94..05fb349e 100644
--- a/integration-tests/run-tests.js
+++ b/integration-tests/run-tests.js
@@ -101,6 +101,7 @@ async function main() {
         KEEP_OUTPUT: keepOutput.toString(),
         VERBOSE: verbose.toString(),
         TEST_FILE_NAME: testFileName,
+        TELEMETRY_LOG_FILE: join(testFileDir, 'telemetry.log'),
       },
     });
diff --git a/integration-tests/run_shell_command.test.js b/integration-tests/run_shell_command.test.js
index 52aee194..2a5f9ed4 100644
--- a/integration-tests/run_shell_command.test.js
+++ b/integration-tests/run_shell_command.test.js
@@ -6,26 +6,58 @@
 import { test } from 'node:test';
 import { strict as assert } from 'assert';
-import { TestRig } from './test-helper.js';
+import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';

-test('should be able to run a shell command', async (t) => {
+test('should be able to run a shell command', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
-  rig.createFile('blah.txt', 'some content');
+  await rig.setup('should be able to run a shell command');

-  const prompt = `Can you use ls to list the contexts of the current folder`;
-  const result = rig.run(prompt);
+  const prompt = `Please run the command "echo hello-world" and show me the output`;

-  assert.ok(result.includes('blah.txt'));
+  const result = await rig.run(prompt);
+
+  const foundToolCall = await rig.waitForToolCall('run_shell_command');
+
+  // Add debugging information
+  if (!foundToolCall || !result.includes('hello-world')) {
+    printDebugInfo(rig, result, {
+      'Found tool call': foundToolCall,
+      'Contains hello-world': result.includes('hello-world'),
+    });
+  }
+
+  assert.ok(foundToolCall, 'Expected to find a run_shell_command tool call');
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  // Model often reports exit code instead of showing output
+  validateModelOutput(
+    result,
+    ['hello-world', 'exit code 0'],
+    'Shell command test',
+  );
 });

-test('should be able to run a shell command via stdin', async (t) => {
+test('should be able to run a shell command via stdin', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
-  rig.createFile('blah.txt', 'some content');
+  await rig.setup('should be able to run a shell command via stdin');

-  const prompt = `Can you use ls to list the contexts of the current folder`;
-  const result = rig.run({ stdin: prompt });
+  const prompt = `Please run the command "echo test-stdin" and show me what it outputs`;

-  assert.ok(result.includes('blah.txt'));
+  const result = await rig.run({ stdin: prompt });
+
+  const foundToolCall = await rig.waitForToolCall('run_shell_command');
+
+  // Add debugging information
+  if (!foundToolCall || !result.includes('test-stdin')) {
+    printDebugInfo(rig, result, {
+      'Test type': 'Stdin test',
+      'Found tool call': foundToolCall,
+      'Contains test-stdin': result.includes('test-stdin'),
+    });
+  }
+
+  assert.ok(foundToolCall, 'Expected to find a run_shell_command tool call');
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  validateModelOutput(result, 'test-stdin', 'Shell command stdin test');
 });
diff --git a/integration-tests/save_memory.test.js b/integration-tests/save_memory.test.js
index 0716f978..3ec641d4 100644
--- a/integration-tests/save_memory.test.js
+++ b/integration-tests/save_memory.test.js
@@ -6,16 +6,36 @@
 import { test } from 'node:test';
 import { strict as assert } from 'assert';
-import { TestRig } from './test-helper.js';
+import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';

-test('should be able to save to memory', async (t) => {
+test('should be able to save to memory', async () => {
   const rig = new TestRig();
-  rig.setup(t.name);
+  await rig.setup('should be able to save to memory');

   const prompt = `remember that my favorite color is blue. what is my favorite color? tell me that and surround it with $ symbol`;

   const result = await rig.run(prompt);

-  assert.ok(result.toLowerCase().includes('$blue$'));
+  const foundToolCall = await rig.waitForToolCall('save_memory');
+
+  // Add debugging information
+  if (!foundToolCall || !result.toLowerCase().includes('blue')) {
+    const allTools = printDebugInfo(rig, result, {
+      'Found tool call': foundToolCall,
+      'Contains blue': result.toLowerCase().includes('blue'),
+    });
+
+    console.error(
+      'Memory tool calls:',
+      allTools
+        .filter((t) => t.toolRequest.name === 'save_memory')
+        .map((t) => t.toolRequest.args),
+    );
+  }
+
+  assert.ok(foundToolCall, 'Expected to find a save_memory tool call');
+
+  // Validate model output - will throw if no output, warn if missing expected content
+  validateModelOutput(result, 'blue', 'Save memory test');
 });
diff --git a/integration-tests/simple-mcp-server.test.js b/integration-tests/simple-mcp-server.test.js
index fc88522d..987f69d2 100644
--- a/integration-tests/simple-mcp-server.test.js
+++ b/integration-tests/simple-mcp-server.test.js
@@ -4,67 +4,208 @@
  * SPDX-License-Identifier: Apache-2.0
  */

-import { test, describe, before, after } from 'node:test';
+/**
+ * This test verifies MCP (Model Context Protocol) server integration.
+ * It uses a minimal MCP server implementation that doesn't require
+ * external dependencies, making it compatible with Docker sandbox mode.
+ */
+
+import { test, describe, before } from 'node:test';
 import { strict as assert } from 'node:assert';
-import { TestRig } from './test-helper.js';
-import { spawn } from 'child_process';
+import { TestRig, validateModelOutput } from './test-helper.js';
 import { join } from 'path';
 import { fileURLToPath } from 'url';
-import { writeFileSync, unlinkSync } from 'fs';
+import { writeFileSync } from 'fs';

 const __dirname = fileURLToPath(new URL('.', import.meta.url));

-const serverScriptPath = join(__dirname, './temp-server.js');
-const serverScript = `
-import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
-import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
-import { z } from 'zod';
+// Create a minimal MCP server that doesn't require external dependencies
+// This implements the MCP protocol directly using Node.js built-ins
+const serverScript = `#!/usr/bin/env node
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */

-const server = new McpServer({
-  name: 'addition-server',
-  version: '1.0.0',
+const readline = require('readline');
+const fs = require('fs');
+
+// Debug logging to stderr (only when MCP_DEBUG or VERBOSE is set)
+const debugEnabled = process.env.MCP_DEBUG === 'true' || process.env.VERBOSE === 'true';
+function debug(msg) {
+  if (debugEnabled) {
+    fs.writeSync(2, \`[MCP-DEBUG] \${msg}\\n\`);
+  }
+}
+
+debug('MCP server starting...');
+
+// Simple JSON-RPC implementation for MCP
+class SimpleJSONRPC {
+  constructor() {
+    this.handlers = new Map();
+    this.rl = readline.createInterface({
+      input: process.stdin,
+      output: process.stdout,
+      terminal: false
+    });
+
+    this.rl.on('line', (line) => {
+      debug(\`Received line: \${line}\`);
+      try {
+        const message = JSON.parse(line);
+        debug(\`Parsed message: \${JSON.stringify(message)}\`);
+        this.handleMessage(message);
+      } catch (e) {
+        debug(\`Parse error: \${e.message}\`);
+      }
+    });
+  }
+
+  send(message) {
+    const msgStr = JSON.stringify(message);
+    debug(\`Sending message: \${msgStr}\`);
+    process.stdout.write(msgStr + '\\n');
+  }
+
+  async handleMessage(message) {
+    if (message.method && this.handlers.has(message.method)) {
+      try {
+        const result = await this.handlers.get(message.method)(message.params || {});
+        if (message.id !== undefined) {
+          this.send({
+            jsonrpc: '2.0',
+            id: message.id,
+            result
+          });
+        }
+      } catch (error) {
+        if (message.id !== undefined) {
+          this.send({
+            jsonrpc: '2.0',
+            id: message.id,
+            error: {
+              code: -32603,
+              message: error.message
+            }
+          });
+        }
+      }
+    } else if (message.id !== undefined) {
+      this.send({
+        jsonrpc: '2.0',
+        id: message.id,
+        error: {
+          code: -32601,
+          message: 'Method not found'
+        }
+      });
+    }
+  }
+
+  on(method, handler) {
+    this.handlers.set(method, handler);
+  }
+}
+
+// Create MCP server
+const rpc = new SimpleJSONRPC();
+
+// Handle initialize
+rpc.on('initialize', async (params) => {
+  debug('Handling initialize request');
+  return {
+    protocolVersion: '2024-11-05',
+    capabilities: {
+      tools: {}
+    },
+    serverInfo: {
+      name: 'addition-server',
+      version: '1.0.0'
+    }
+  };
 });

-server.registerTool(
-  'add',
-  {
-    title: 'Addition Tool',
-    description: 'Add two numbers',
-    inputSchema: { a: z.number(), b: z.number() },
-  },
-  async ({ a, b }) => ({
-    content: [{ type: 'text', text: String(a + b) }],
-  }),
-);
+// Handle tools/list
+rpc.on('tools/list', async () => {
+  debug('Handling tools/list request');
+  return {
+    tools: [{
+      name: 'add',
+      description: 'Add two numbers',
+      inputSchema: {
+        type: 'object',
+        properties: {
+          a: { type: 'number', description: 'First number' },
+          b: { type: 'number', description: 'Second number' }
+        },
+        required: ['a', 'b']
+      }
+    }]
+  };
+});

-const transport = new StdioServerTransport();
-await server.connect(transport);
+// Handle tools/call
+rpc.on('tools/call', async (params) => {
+  debug(\`Handling tools/call request for tool: \${params.name}\`);
+  if (params.name === 'add') {
+    const { a, b } = params.arguments;
+    return {
+      content: [{
+        type: 'text',
+        text: String(a + b)
+      }]
+    };
+  }
+  throw new Error('Unknown tool: ' + params.name);
+});
+
+// Send initialization notification
+rpc.send({
+  jsonrpc: '2.0',
+  method: 'initialized'
+});
 `;

 describe('simple-mcp-server', () => {
   const rig = new TestRig();
-  let child;

-  before(() => {
-    writeFileSync(serverScriptPath, serverScript);
-    child = spawn('node', [serverScriptPath], {
-      stdio: ['pipe', 'pipe', 'pipe'],
+  before(async () => {
+    // Setup test directory with MCP server configuration
+    await rig.setup('simple-mcp-server', {
+      settings: {
+        mcpServers: {
+          'addition-server': {
+            command: 'node',
+            args: ['mcp-server.cjs'],
+          },
+        },
+      },
     });
-    child.stderr.on('data', (data) => {
-      console.error(`stderr: ${data}`);
-    });
-    // Wait for the server to be ready
-    return new Promise((resolve) => setTimeout(resolve, 2000));
+
+    // Create server script in the test directory
+    const testServerPath = join(rig.testDir, 'mcp-server.cjs');
+    writeFileSync(testServerPath, serverScript);
+
+    // Make the script executable (though running with 'node' should work anyway)
+    if (process.platform !== 'win32') {
+      const { chmodSync } = await import('fs');
+      chmodSync(testServerPath, 0o755);
+    }
   });

-  after(() => {
-    child.kill();
-    unlinkSync(serverScriptPath);
-  });
+  test('should add two numbers', async () => {
+    // Test directory is already set up in before hook
+    // Just run the command - MCP server config is in settings.json
+    const output = await rig.run('add 5 and 10');

-  test('should add two numbers', () => {
-    rig.setup('should add two numbers');
-    const output = rig.run('add 5 and 10');
-    assert.ok(output.includes('15'));
+    const foundToolCall = await rig.waitForToolCall('add');
+
+    assert.ok(foundToolCall, 'Expected to find an add tool call');
+
+    // Validate model output - will throw if no output, fail if missing expected content
+    validateModelOutput(output, '15', 'MCP server test');
+    assert.ok(output.includes('15'), 'Expected output to contain the sum (15)');
   });
 });
diff --git a/integration-tests/test-helper.js b/integration-tests/test-helper.js
index 7ee3db87..9526ea5f 100644
--- a/integration-tests/test-helper.js
+++ b/integration-tests/test-helper.js
@@ -4,11 +4,13 @@
  * SPDX-License-Identifier: Apache-2.0
  */

-import { execSync } from 'child_process';
+import { execSync, spawn } from 'child_process';
+import { parse } from 'shell-quote';
 import { mkdirSync, writeFileSync, readFileSync } from 'fs';
 import { join, dirname } from 'path';
 import { fileURLToPath } from 'url';
 import { env } from 'process';
+import { fileExists } from '../scripts/telemetry_utils.js';

 const __dirname = dirname(fileURLToPath(import.meta.url));

@@ -19,17 +21,129 @@ function sanitizeTestName(name) {
     .replace(/-+/g, '-');
 }

+// Helper to create detailed error messages
+export function createToolCallErrorMessage(expectedTools, foundTools, result) {
+  const expectedStr = Array.isArray(expectedTools)
+    ? expectedTools.join(' or ')
+    : expectedTools;
+  return (
+    `Expected to find ${expectedStr} tool call(s). ` +
+    `Found: ${foundTools.length > 0 ? foundTools.join(', ') : 'none'}. ` +
+    `Output preview: ${result ? result.substring(0, 200) + '...' : 'no output'}`
+  );
+}
+
+// Helper to print debug information when tests fail
+export function printDebugInfo(rig, result, context = {}) {
+  console.error('Test failed - Debug info:');
+  console.error('Result length:', result.length);
+  console.error('Result (first 500 chars):', result.substring(0, 500));
+  console.error(
+    'Result (last 500 chars):',
+    result.substring(result.length - 500),
+  );
+
+  // Print any additional context provided
+  Object.entries(context).forEach(([key, value]) => {
+    console.error(`${key}:`, value);
+  });
+
+  // Check what tools were actually called
+  const allTools = rig.readToolLogs();
+  console.error(
+    'All tool calls found:',
+    allTools.map((t) => t.toolRequest.name),
+  );
+
+  return allTools;
+}
+
+// Helper to validate model output and warn about unexpected content
+export function validateModelOutput(
+  result,
+  expectedContent = null,
+  testName = '',
+) {
+  // First, check if there's any output at all (this should fail the test if missing)
+  if (!result || result.trim().length === 0) {
+    throw new Error('Expected LLM to return some output');
+  }
+
+  // If expectedContent is provided, check for it and warn if missing
+  if (expectedContent) {
+    const contents = Array.isArray(expectedContent)
+      ? expectedContent
+      : [expectedContent];
+    const missingContent = contents.filter((content) => {
+      if (typeof content === 'string') {
+        return !result.toLowerCase().includes(content.toLowerCase());
+      } else if (content instanceof RegExp) {
+        return !content.test(result);
+      }
+      return false;
+    });
+
+    if (missingContent.length > 0) {
+      console.warn(
+        `Warning: LLM did not include expected content in response: ${missingContent.join(', ')}.`,
+        'This is not ideal but not a test failure.',
+      );
+      console.warn(
+        'The tool was called successfully, which is the main requirement.',
+      );
+      return false;
+    } else if (process.env.VERBOSE === 'true') {
+      console.log(`${testName}: Model output validated successfully.`);
+    }
+    return true;
+  }
+
+  return true;
+}
+
 export class TestRig {
   constructor() {
     this.bundlePath = join(__dirname, '..', 'bundle/gemini.js');
    this.testDir = null;
   }

-  setup(testName) {
+  // Get timeout based on environment
+  getDefaultTimeout() {
+    if (env.CI) return 60000; // 1 minute in CI
+    if (env.GEMINI_SANDBOX) return 30000; // 30s in containers
+    return 15000; // 15s locally
+  }
+
+  setup(testName, options = {}) {
     this.testName = testName;
     const sanitizedName = sanitizeTestName(testName);
     this.testDir = join(env.INTEGRATION_TEST_FILE_DIR, sanitizedName);
     mkdirSync(this.testDir, { recursive: true });
+
+    // Create a settings file to point the CLI to the local collector
+    const geminiDir = join(this.testDir, '.gemini');
+    mkdirSync(geminiDir, { recursive: true });
+    // In sandbox mode, use an absolute path for telemetry inside the container
+    // The container mounts the test directory at the same path as the host
+    const telemetryPath =
+      env.GEMINI_SANDBOX && env.GEMINI_SANDBOX !== 'false'
+        ? join(this.testDir, 'telemetry.log') // Absolute path in test directory
+        : env.TELEMETRY_LOG_FILE; // Absolute path for non-sandbox
+
+    const settings = {
+      telemetry: {
+        enabled: true,
+        target: 'local',
+        otlpEndpoint: '',
+        outfile: telemetryPath,
+      },
+      sandbox: env.GEMINI_SANDBOX !== 'false' ? env.GEMINI_SANDBOX : false,
+      ...options.settings, // Allow tests to override/add settings
+    };
+    writeFileSync(
+      join(geminiDir, 'settings.json'),
+      JSON.stringify(settings, null, 2),
+    );
   }

   createFile(fileName, content) {
@@ -39,7 +153,7 @@ export class TestRig {
   }

   mkdir(dir) {
-    mkdirSync(join(this.testDir, dir));
+    mkdirSync(join(this.testDir, dir), { recursive: true });
   }

   sync() {
@@ -70,19 +184,88 @@ export class TestRig {
     command += ` ${args.join(' ')}`;

-    const output = execSync(command, execOptions);
+    const commandArgs = parse(command);
+    const node = commandArgs.shift();

-    if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
-      const testId = `${env.TEST_FILE_NAME.replace(
-        '.test.js',
-        '',
-      )}:${this.testName.replace(/ /g, '-')}`;
-      console.log(`--- TEST: ${testId} ---`);
-      console.log(output);
-      console.log(`--- END TEST: ${testId} ---`);
+    const child = spawn(node, commandArgs, {
+      cwd: this.testDir,
+      stdio: 'pipe',
+    });
+
+    let stdout = '';
+    let stderr = '';
+
+    // Handle stdin if provided
+    if (execOptions.input) {
+      child.stdin.write(execOptions.input);
+      child.stdin.end();
     }

-    return output;
+    child.stdout.on('data', (data) => {
+      stdout += data;
+      if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
+        process.stdout.write(data);
+      }
+    });
+
+    child.stderr.on('data', (data) => {
+      stderr += data;
+      if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
+        process.stderr.write(data);
+      }
+    });
+
+    const promise = new Promise((resolve, reject) => {
+      child.on('close', (code) => {
+        if (code === 0) {
+          // Store the raw stdout for Podman telemetry parsing
+          this._lastRunStdout = stdout;
+
+          // Filter out telemetry output when running with Podman
+          // Podman seems to output telemetry to stdout even when writing to file
+          let result = stdout;
+          if (env.GEMINI_SANDBOX === 'podman') {
+            // Remove telemetry JSON objects from output
+            // They are multi-line JSON objects that start with { and contain telemetry fields
+            const lines = result.split('\n');
+            const filteredLines = [];
+            let inTelemetryObject = false;
+            let braceDepth = 0;
+
+            for (const line of lines) {
+              if (!inTelemetryObject && line.trim() === '{') {
+                // Check if this might be start of telemetry object
+                inTelemetryObject = true;
+                braceDepth = 1;
+              } else if (inTelemetryObject) {
+                // Count braces to track nesting
+                for (const char of line) {
+                  if (char === '{') braceDepth++;
+                  else if (char === '}') braceDepth--;
+                }
+
+                // Check if we've closed all braces
+                if (braceDepth === 0) {
+                  inTelemetryObject = false;
+                  // Skip this line (the closing brace)
+                  continue;
+                }
+              } else {
+                // Not in telemetry object, keep the line
+                filteredLines.push(line);
+              }
+            }
+
+            result = filteredLines.join('\n');
+          }
+          resolve(result);
+        } else {
+          reject(new Error(`Process exited with code ${code}:\n${stderr}`));
+        }
+      });
+    });
+
+    return promise;
   }

   readFile(fileName) {
@@ -98,4 +281,312 @@ export class TestRig {
     }
     return content;
   }
+
+  async cleanup() {
+    // Clean up test directory
+    if (this.testDir && !env.KEEP_OUTPUT) {
+      try {
+        execSync(`rm -rf ${this.testDir}`);
+      } catch (error) {
+        // Ignore cleanup errors
+        if (env.VERBOSE === 'true') {
+          console.warn('Cleanup warning:', error.message);
+        }
+      }
+    }
+  }
+
+  async waitForTelemetryReady() {
+    // In sandbox mode, telemetry is written to a relative path in the test directory
+    const logFilePath =
+      env.GEMINI_SANDBOX && env.GEMINI_SANDBOX !== 'false'
+        ? join(this.testDir, 'telemetry.log')
+        : env.TELEMETRY_LOG_FILE;
+
+    if (!logFilePath) return;
+
+    // Wait for telemetry file to exist and have content
+    await this.poll(
+      () => {
+        if (!fileExists(logFilePath)) return false;
+        try {
+          const content = readFileSync(logFilePath, 'utf-8');
+          // Check if file has meaningful content (at least one complete JSON object)
+          return content.includes('"event.name"');
+        } catch (_e) {
+          return false;
+        }
+      },
+      2000, // 2 seconds max - reduced since telemetry should flush on exit now
+      100, // check every 100ms
+    );
+  }
+
+  async waitForToolCall(toolName, timeout) {
+    // Use environment-specific timeout
+    if (!timeout) {
+      timeout = this.getDefaultTimeout();
+    }
+
+    // Wait for telemetry to be ready before polling for tool calls
+    await this.waitForTelemetryReady();
+
+    return this.poll(
+      () => {
+        const toolLogs = this.readToolLogs();
+        return toolLogs.some((log) => log.toolRequest.name === toolName);
+      },
+      timeout,
+      100,
+    );
+  }
+
+  async waitForAnyToolCall(toolNames, timeout) {
+    // Use environment-specific timeout
+    if (!timeout) {
+      timeout = this.getDefaultTimeout();
+    }
+
+    // Wait for telemetry to be ready before polling for tool calls
+    await this.waitForTelemetryReady();
+
+    return this.poll(
+      () => {
+        const toolLogs = this.readToolLogs();
+        return toolNames.some((name) =>
+          toolLogs.some((log) => log.toolRequest.name === name),
+        );
+      },
+      timeout,
+      100,
+    );
+  }
+
+  async poll(predicate, timeout, interval) {
+    const startTime = Date.now();
+    let attempts = 0;
+    while (Date.now() - startTime < timeout) {
+      attempts++;
+      const result = predicate();
+      if (env.VERBOSE === 'true' && attempts % 5 === 0) {
+        console.log(
+          `Poll attempt ${attempts}: ${result ? 'success' : 'waiting...'}`,
+        );
+      }
+      if (result) {
+        return true;
+      }
+      await new Promise((resolve) => setTimeout(resolve, interval));
+    }
+    if (env.VERBOSE === 'true') {
+      console.log(`Poll timed out after ${attempts} attempts`);
+    }
+    return false;
+  }
+
+  _parseToolLogsFromStdout(stdout) {
+    const logs = [];
+
+    // The console output from Podman is JavaScript object notation, not JSON
+    // Look for tool call events in the output
+    // Updated regex to handle tool names with hyphens and underscores
+    const toolCallPattern =
+      /body:\s*'Tool call:\s*([\w-]+)\..*?Success:\s*(\w+)\..*?Duration:\s*(\d+)ms\.'/g;
+    const matches = [...stdout.matchAll(toolCallPattern)];
+
+    for (const match of matches) {
+      const toolName = match[1];
+      const success = match[2] === 'true';
+      const duration = parseInt(match[3], 10);
+
+      // Try to find function_args nearby
+      const matchIndex = match.index || 0;
+      const contextStart = Math.max(0, matchIndex - 500);
+      const contextEnd = Math.min(stdout.length, matchIndex + 500);
+      const context = stdout.substring(contextStart, contextEnd);
+
+      // Look for function_args in the context
+      let args = '{}';
+      const argsMatch = context.match(/function_args:\s*'([^']+)'/);
+      if (argsMatch) {
+        args = argsMatch[1];
+      }
+
+      // Also try to find function_name to double-check
+      // Updated regex to handle tool names with hyphens and underscores
+      const nameMatch = context.match(/function_name:\s*'([\w-]+)'/);
+      const actualToolName = nameMatch ?
nameMatch[1] : toolName; + + logs.push({ + timestamp: Date.now(), + toolRequest: { + name: actualToolName, + args: args, + success: success, + duration_ms: duration, + }, + }); + } + + // If no matches found with the simple pattern, try the JSON parsing approach + // in case the format changes + if (logs.length === 0) { + const lines = stdout.split('\n'); + let currentObject = ''; + let inObject = false; + let braceDepth = 0; + + for (const line of lines) { + if (!inObject && line.trim() === '{') { + inObject = true; + braceDepth = 1; + currentObject = line + '\n'; + } else if (inObject) { + currentObject += line + '\n'; + + // Count braces + for (const char of line) { + if (char === '{') braceDepth++; + else if (char === '}') braceDepth--; + } + + // If we've closed all braces, try to parse the object + if (braceDepth === 0) { + inObject = false; + try { + const obj = JSON.parse(currentObject); + + // Check for tool call in different formats + if ( + obj.body && + obj.body.includes('Tool call:') && + obj.attributes + ) { + const bodyMatch = obj.body.match(/Tool call: (\w+)\./); + if (bodyMatch) { + logs.push({ + timestamp: obj.timestamp || Date.now(), + toolRequest: { + name: bodyMatch[1], + args: obj.attributes.function_args || '{}', + success: obj.attributes.success !== false, + duration_ms: obj.attributes.duration_ms || 0, + }, + }); + } + } else if ( + obj.attributes && + obj.attributes['event.name'] === 'gemini_cli.tool_call' + ) { + logs.push({ + timestamp: obj.attributes['event.timestamp'], + toolRequest: { + name: obj.attributes.function_name, + args: obj.attributes.function_args, + success: obj.attributes.success, + duration_ms: obj.attributes.duration_ms, + }, + }); + } + } catch (_e) { + // Not valid JSON + } + currentObject = ''; + } + } + } + } + + return logs; + } + + readToolLogs() { + // For Podman, first check if telemetry file exists and has content + // If not, fall back to parsing from stdout + if (env.GEMINI_SANDBOX === 'podman') { + // Try 
reading from file first + const logFilePath = join(this.testDir, 'telemetry.log'); + + if (fileExists(logFilePath)) { + try { + const content = readFileSync(logFilePath, 'utf-8'); + if (content && content.includes('"event.name"')) { + // File has content, use normal file parsing + // Continue to the normal file parsing logic below + } else if (this._lastRunStdout) { + // File exists but is empty or doesn't have events, parse from stdout + return this._parseToolLogsFromStdout(this._lastRunStdout); + } + } catch (_e) { + // Error reading file, fall back to stdout + if (this._lastRunStdout) { + return this._parseToolLogsFromStdout(this._lastRunStdout); + } + } + } else if (this._lastRunStdout) { + // No file exists, parse from stdout + return this._parseToolLogsFromStdout(this._lastRunStdout); + } + } + + // In sandbox mode, telemetry is written to a relative path in the test directory + const logFilePath = + env.GEMINI_SANDBOX && env.GEMINI_SANDBOX !== 'false' + ? join(this.testDir, 'telemetry.log') + : env.TELEMETRY_LOG_FILE; + + if (!logFilePath) { + console.warn(`TELEMETRY_LOG_FILE environment variable not set`); + return []; + } + + // Check if file exists, if not return empty array (file might not be created yet) + if (!fileExists(logFilePath)) { + return []; + } + + const content = readFileSync(logFilePath, 'utf-8'); + + // Split the content into individual JSON objects + // They are separated by "}\n{" pattern + const jsonObjects = content + .split(/}\s*\n\s*{/) + .map((obj, index, array) => { + // Add back the braces we removed during split + if (index > 0) obj = '{' + obj; + if (index < array.length - 1) obj = obj + '}'; + return obj.trim(); + }) + .filter((obj) => obj); + + const logs = []; + + for (const jsonStr of jsonObjects) { + try { + const logData = JSON.parse(jsonStr); + // Look for tool call logs + if ( + logData.attributes && + logData.attributes['event.name'] === 'gemini_cli.tool_call' + ) { + const toolName = logData.attributes.function_name; + 
logs.push({ + toolRequest: { + name: toolName, + args: logData.attributes.function_args, + success: logData.attributes.success, + duration_ms: logData.attributes.duration_ms, + }, + }); + } + } catch (_e) { + // Skip objects that aren't valid JSON + if (env.VERBOSE === 'true') { + console.error('Failed to parse telemetry object:', _e.message); + } + } + } + + return logs; + } } diff --git a/integration-tests/write_file.test.js b/integration-tests/write_file.test.js index 46a15f3c..7809161e 100644 --- a/integration-tests/write_file.test.js +++ b/integration-tests/write_file.test.js @@ -6,16 +6,63 @@ import { test } from 'node:test'; import { strict as assert } from 'assert'; -import { TestRig } from './test-helper.js'; +import { + TestRig, + createToolCallErrorMessage, + printDebugInfo, + validateModelOutput, +} from './test-helper.js'; -test('should be able to write a file', async (t) => { +test('should be able to write a file', async () => { const rig = new TestRig(); - rig.setup(t.name); + await rig.setup('should be able to write a file'); const prompt = `show me an example of using the write tool. 
put a dad joke in dad.txt`; - await rig.run(prompt); + const result = await rig.run(prompt); + + const foundToolCall = await rig.waitForToolCall('write_file'); + + // Add debugging information + if (!foundToolCall) { + printDebugInfo(rig, result); + } + + const allTools = rig.readToolLogs(); + assert.ok( + foundToolCall, + createToolCallErrorMessage( + 'write_file', + allTools.map((t) => t.toolRequest.name), + result, + ), + ); + + // Validate model output - will throw if no output, warn if missing expected content + validateModelOutput(result, 'dad.txt', 'Write file test'); + const newFilePath = 'dad.txt'; const newFileContent = rig.readFile(newFilePath); - assert.notEqual(newFileContent, ''); + + // Add debugging for file content + if (newFileContent === '') { + console.error('File was created but is empty'); + console.error( + 'Tool calls:', + rig.readToolLogs().map((t) => ({ + name: t.toolRequest.name, + args: t.toolRequest.args, + })), + ); + } + + assert.notEqual(newFileContent, '', 'Expected file to have content'); + + // Log success info if verbose + if (process.env.VERBOSE === 'true') { + console.log( + 'File created successfully with content:', + newFileContent.substring(0, 100) + '...', + ); + } }); diff --git a/package-lock.json b/package-lock.json index e04410c1..e1bfdc18 100644 --- a/package-lock.json +++ b/package-lock.json @@ -14,6 +14,7 @@ "qwen": "bundle/gemini.js" }, "devDependencies": { + "@types/marked": "^5.0.2", "@types/micromatch": "^4.0.9", "@types/mime-types": "^3.0.1", "@types/minimatch": "^5.1.2", @@ -2423,6 +2424,13 @@ "dev": true, "license": "MIT" }, + "node_modules/@types/marked": { + "version": "5.0.2", + "resolved": "https://registry.npmjs.org/@types/marked/-/marked-5.0.2.tgz", + "integrity": "sha512-OucS4KMHhFzhz27KxmWg7J+kIYqyqoW5kdIEI319hqARQQUTqhao3M/F+uFnDXD0Rg72iDDZxZNxq5gvctmLlg==", + "dev": true, + "license": "MIT" + }, "node_modules/@types/micromatch": { "version": "4.0.9", "resolved": 
"https://registry.npmjs.org/@types/micromatch/-/micromatch-4.0.9.tgz", @@ -7732,6 +7740,31 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/make-dir/node_modules/semver": { + "version": "7.7.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.2.tgz", + "integrity": "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==", + "dev": true, + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/marked": { + "version": "15.0.12", + "resolved": "https://registry.npmjs.org/marked/-/marked-15.0.12.tgz", + "integrity": "sha512-8dD6FusOQSrpv9Z1rdNMdlSgQOIP880DHqnohobOmYLElGEqAL/JvxvuxZO16r4HtjTlfPRDC1hbvxC9dPN2nA==", + "license": "MIT", + "bin": { + "marked": "bin/marked.js" + }, + "engines": { + "node": ">= 18" + } + }, "node_modules/math-intrinsics": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", @@ -11864,6 +11897,7 @@ "html-to-text": "^9.0.5", "https-proxy-agent": "^7.0.6", "ignore": "^7.0.0", + "marked": "^15.0.12", "micromatch": "^4.0.8", "open": "^10.1.2", "openai": "^5.7.0", diff --git a/package.json b/package.json index 1db7cef0..929ef9b6 100644 --- a/package.json +++ b/package.json @@ -57,6 +57,7 @@ "LICENSE" ], "devDependencies": { + "@types/marked": "^5.0.2", "@types/micromatch": "^4.0.9", "@types/mime-types": "^3.0.1", "@types/minimatch": "^5.1.2", diff --git a/packages/cli/src/config/config.test.ts b/packages/cli/src/config/config.test.ts index c8e2bd71..0f9d2e2e 100644 --- a/packages/cli/src/config/config.test.ts +++ b/packages/cli/src/config/config.test.ts @@ -35,6 +35,13 @@ vi.mock('@qwen-code/qwen-code-core', async () => { ); return { ...actualServer, + IdeClient: { + getInstance: vi.fn().mockReturnValue({ + getConnectionStatus: vi.fn(), + initialize: vi.fn(), + shutdown: vi.fn(), + }), + }, loadEnvironment: vi.fn(), loadServerHierarchicalMemory: vi.fn( 
(cwd, debug, fileService, extensionPaths, _maxDirs) => @@ -499,6 +506,7 @@ describe('Hierarchical Memory Loading (config.ts) - Placeholder Suite', () => { '/path/to/ext3/context1.md', '/path/to/ext3/context2.md', ], + 'tree', { respectGitIgnore: false, respectGeminiIgnore: true, @@ -983,7 +991,69 @@ describe('loadCliConfig extensions', () => { }); }); -describe('loadCliConfig ideMode', () => { +describe('loadCliConfig model selection', () => { + it('selects a model from settings.json if provided', async () => { + process.argv = ['node', 'script.js']; + const argv = await parseArguments(); + const config = await loadCliConfig( + { + model: 'qwen3-coder-plus', + }, + [], + 'test-session', + argv, + ); + + expect(config.getModel()).toBe('qwen3-coder-plus'); + }); + + it('uses the default gemini model if nothing is set', async () => { + process.argv = ['node', 'script.js']; // No model set. + const argv = await parseArguments(); + const config = await loadCliConfig( + { + // No model set. + }, + [], + 'test-session', + argv, + ); + + expect(config.getModel()).toBe('qwen3-coder-plus'); + }); + + it('always prefers model from argvs', async () => { + process.argv = ['node', 'script.js', '--model', 'qwen3-coder-plus']; + const argv = await parseArguments(); + const config = await loadCliConfig( + { + model: 'qwen3-coder-plus', + }, + [], + 'test-session', + argv, + ); + + expect(config.getModel()).toBe('qwen3-coder-plus'); + }); + + it('selects the model from argvs if provided', async () => { + process.argv = ['node', 'script.js', '--model', 'qwen3-coder-plus']; + const argv = await parseArguments(); + const config = await loadCliConfig( + { + // No model provided via settings. 
+ }, + [], + 'test-session', + argv, + ); + + expect(config.getModel()).toBe('qwen3-coder-plus'); + }); +}); + +describe('loadCliConfig ideModeFeature', () => { const originalArgv = process.argv; const originalEnv = { ...process.env }; @@ -991,8 +1061,6 @@ describe('loadCliConfig ideMode', () => { vi.resetAllMocks(); vi.mocked(os.homedir).mockReturnValue('/mock/home/user'); process.env.GEMINI_API_KEY = 'test-api-key'; - // Explicitly delete TERM_PROGRAM and SANDBOX before each test - delete process.env.TERM_PROGRAM; delete process.env.SANDBOX; delete process.env.GEMINI_CLI_IDE_SERVER_PORT; }); @@ -1008,81 +1076,16 @@ describe('loadCliConfig ideMode', () => { const settings: Settings = {}; const argv = await parseArguments(); const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(false); + expect(config.getIdeModeFeature()).toBe(false); }); - it('should be false if --ide-mode is true but TERM_PROGRAM is not vscode', async () => { - process.argv = ['node', 'script.js', '--ide-mode']; - const settings: Settings = {}; - const argv = await parseArguments(); - const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(false); - }); - - it('should be false if settings.ideMode is true but TERM_PROGRAM is not vscode', async () => { - process.argv = ['node', 'script.js']; - const argv = await parseArguments(); - const settings: Settings = { ideMode: true }; - const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(false); - }); - - it('should be true when --ide-mode is set and TERM_PROGRAM is vscode', async () => { - process.argv = ['node', 'script.js', '--ide-mode']; - const argv = await parseArguments(); - process.env.TERM_PROGRAM = 'vscode'; - process.env.GEMINI_CLI_IDE_SERVER_PORT = '3000'; - const settings: Settings = {}; - const config = await loadCliConfig(settings, [], 'test-session', argv); - 
expect(config.getIdeMode()).toBe(true); - }); - - it('should be true when settings.ideMode is true and TERM_PROGRAM is vscode', async () => { - process.argv = ['node', 'script.js']; - const argv = await parseArguments(); - process.env.TERM_PROGRAM = 'vscode'; - process.env.GEMINI_CLI_IDE_SERVER_PORT = '3000'; - const settings: Settings = { ideMode: true }; - const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(true); - }); - - it('should prioritize --ide-mode (true) over settings (false) when TERM_PROGRAM is vscode', async () => { - process.argv = ['node', 'script.js', '--ide-mode']; - const argv = await parseArguments(); - process.env.TERM_PROGRAM = 'vscode'; - process.env.GEMINI_CLI_IDE_SERVER_PORT = '3000'; - const settings: Settings = { ideMode: false }; - const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(true); - }); - - it('should prioritize --no-ide-mode (false) over settings (true) even when TERM_PROGRAM is vscode', async () => { - process.argv = ['node', 'script.js', '--no-ide-mode']; - const argv = await parseArguments(); - process.env.TERM_PROGRAM = 'vscode'; - const settings: Settings = { ideMode: true }; - const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(false); - }); - - it('should be false when --ide-mode is true, TERM_PROGRAM is vscode, but SANDBOX is set', async () => { - process.argv = ['node', 'script.js', '--ide-mode']; - const argv = await parseArguments(); - process.env.TERM_PROGRAM = 'vscode'; - process.env.SANDBOX = 'true'; - const settings: Settings = {}; - const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(false); - }); - - it('should be false when settings.ideMode is true, TERM_PROGRAM is vscode, but SANDBOX is set', async () => { + it('should be false when settings.ideModeFeature is true, but SANDBOX is set', async 
() => { process.argv = ['node', 'script.js']; const argv = await parseArguments(); process.env.TERM_PROGRAM = 'vscode'; process.env.SANDBOX = 'true'; - const settings: Settings = { ideMode: true }; + const settings: Settings = { ideModeFeature: true }; const config = await loadCliConfig(settings, [], 'test-session', argv); - expect(config.getIdeMode()).toBe(false); + expect(config.getIdeModeFeature()).toBe(false); }); }); diff --git a/packages/cli/src/config/config.ts b/packages/cli/src/config/config.ts index 5e6652e8..803b21f3 100644 --- a/packages/cli/src/config/config.ts +++ b/packages/cli/src/config/config.ts @@ -4,6 +4,9 @@ * SPDX-License-Identifier: Apache-2.0 */ +import * as fs from 'fs'; +import * as path from 'path'; +import { homedir } from 'node:os'; import yargs from 'yargs/yargs'; import { hideBin } from 'yargs/helpers'; import process from 'node:process'; @@ -59,11 +62,12 @@ export interface CliArgs { experimentalAcp: boolean | undefined; extensions: string[] | undefined; listExtensions: boolean | undefined; - ideMode: boolean | undefined; + ideModeFeature: boolean | undefined; openaiLogging: boolean | undefined; openaiApiKey: string | undefined; openaiBaseUrl: string | undefined; proxy: string | undefined; + includeDirectories: string[] | undefined; } export async function parseArguments(): Promise { @@ -77,7 +81,7 @@ export async function parseArguments(): Promise { alias: 'm', type: 'string', description: `Model`, - default: process.env.GEMINI_MODEL || DEFAULT_GEMINI_MODEL, + default: process.env.GEMINI_MODEL, }) .option('prompt', { alias: 'p', @@ -193,7 +197,7 @@ export async function parseArguments(): Promise { type: 'boolean', description: 'List all available extensions and exit.', }) - .option('ide-mode', { + .option('ide-mode-feature', { type: 'boolean', description: 'Run in IDE mode?', }) @@ -215,6 +219,15 @@ export async function parseArguments(): Promise { description: 'Proxy for gemini client, like schema://user:password@host:port', }) + 
.option('include-directories', { + type: 'array', + string: true, + description: + 'Additional directories to include in the workspace (comma-separated or multiple --include-directories)', + coerce: (dirs: string[]) => + // Handle comma-separated values + dirs.flatMap((dir) => dir.split(',').map((d) => d.trim())), + }) .version(await getCliVersion()) // This will enable the --version flag based on package.json .alias('v', 'version') .help() @@ -230,7 +243,11 @@ export async function parseArguments(): Promise { }); yargsInstance.wrap(yargsInstance.terminalWidth()); - return yargsInstance.argv; + const result = yargsInstance.parseSync(); + + // The import format is now only controlled by settings.memoryImportFormat + // We no longer accept it as a CLI argument + return result as CliArgs; } // This function is now a thin wrapper around the server's implementation. @@ -242,21 +259,31 @@ export async function loadHierarchicalGeminiMemory( fileService: FileDiscoveryService, settings: Settings, extensionContextFilePaths: string[] = [], + memoryImportFormat: 'flat' | 'tree' = 'tree', fileFilteringOptions?: FileFilteringOptions, ): Promise<{ memoryContent: string; fileCount: number }> { + // FIX: Use real, canonical paths for a reliable comparison to handle symlinks. + const realCwd = fs.realpathSync(path.resolve(currentWorkingDirectory)); + const realHome = fs.realpathSync(path.resolve(homedir())); + const isHomeDirectory = realCwd === realHome; + + // If it is the home directory, pass an empty string to the core memory + // function to signal that it should skip the workspace search. + const effectiveCwd = isHomeDirectory ? '' : currentWorkingDirectory; + if (debugMode) { logger.debug( - `CLI: Delegating hierarchical memory load to server for CWD: ${currentWorkingDirectory}`, + `CLI: Delegating hierarchical memory load to server for CWD: ${currentWorkingDirectory} (memoryImportFormat: ${memoryImportFormat})`, ); } - // Directly call the server function. 
- // The server function will use its own homedir() for the global path. + // Directly call the server function with the corrected path. return loadServerHierarchicalMemory( - currentWorkingDirectory, + effectiveCwd, debugMode, fileService, extensionContextFilePaths, + memoryImportFormat, fileFilteringOptions, settings.memoryDiscoveryMaxDirs, ); @@ -272,17 +299,16 @@ export async function loadCliConfig( argv.debug || [process.env.DEBUG, process.env.DEBUG_MODE].some( (v) => v === 'true' || v === '1', - ); + ) || + false; + const memoryImportFormat = settings.memoryImportFormat || 'tree'; + const ideMode = settings.ideMode ?? false; - const ideMode = - (argv.ideMode ?? settings.ideMode ?? false) && - process.env.TERM_PROGRAM === 'vscode' && + const ideModeFeature = + (argv.ideModeFeature ?? settings.ideModeFeature ?? false) && !process.env.SANDBOX; - let ideClient: IdeClient | undefined; - if (ideMode) { - ideClient = new IdeClient(); - } + const ideClient = IdeClient.getInstance(ideMode && ideModeFeature); const allExtensions = annotateActiveExtensions( extensions, @@ -331,6 +357,7 @@ export async function loadCliConfig( fileService, settings, extensionContextFilePaths, + memoryImportFormat, fileFiltering, ); @@ -391,6 +418,7 @@ export async function loadCliConfig( embeddingModel: DEFAULT_GEMINI_EMBEDDING_MODEL, sandbox: sandboxConfig, targetDir: process.cwd(), + includeDirectories: argv.includeDirectories, debugMode, question: argv.promptInteractive || argv.prompt || '', fullContext: argv.allFiles || argv.all_files || false, @@ -438,7 +466,7 @@ export async function loadCliConfig( cwd: process.cwd(), fileDiscoveryService: fileService, bugCommand: settings.bugCommand, - model: argv.model!, + model: argv.model || settings.model || DEFAULT_GEMINI_MODEL, extensionContextFilePaths, maxSessionTurns: settings.maxSessionTurns ?? -1, sessionTokenLimit: settings.sessionTokenLimit ?? 
32000,
@@ -450,6 +478,7 @@ export async function loadCliConfig(
     noBrowser: !!process.env.NO_BROWSER,
     summarizeToolOutput: settings.summarizeToolOutput,
     ideMode,
+    ideModeFeature,
     ideClient,
     enableOpenAILogging:
       (typeof argv.openaiLogging === 'undefined'
diff --git a/packages/cli/src/config/extension.test.ts b/packages/cli/src/config/extension.test.ts
index 1ee46d4c..9f1aec98 100644
--- a/packages/cli/src/config/extension.test.ts
+++ b/packages/cli/src/config/extension.test.ts
@@ -42,6 +42,81 @@ describe('loadExtensions', () => {
     fs.rmSync(tempHomeDir, { recursive: true, force: true });
   });

+  it('should include extension path in loaded extension', () => {
+    const workspaceExtensionsDir = path.join(
+      tempWorkspaceDir,
+      EXTENSIONS_DIRECTORY_NAME,
+    );
+    fs.mkdirSync(workspaceExtensionsDir, { recursive: true });
+
+    const extensionDir = path.join(workspaceExtensionsDir, 'test-extension');
+    fs.mkdirSync(extensionDir, { recursive: true });
+
+    const config = {
+      name: 'test-extension',
+      version: '1.0.0',
+    };
+    fs.writeFileSync(
+      path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME),
+      JSON.stringify(config),
+    );
+
+    const extensions = loadExtensions(tempWorkspaceDir);
+    expect(extensions).toHaveLength(1);
+    expect(extensions[0].path).toBe(extensionDir);
+    expect(extensions[0].config.name).toBe('test-extension');
+  });
+
+  it('should include extension path in loaded extension', () => {
+    const workspaceExtensionsDir = path.join(
+      tempWorkspaceDir,
+      EXTENSIONS_DIRECTORY_NAME,
+    );
+    fs.mkdirSync(workspaceExtensionsDir, { recursive: true });
+
+    const extensionDir = path.join(workspaceExtensionsDir, 'test-extension');
+    fs.mkdirSync(extensionDir, { recursive: true });
+
+    const config = {
+      name: 'test-extension',
+      version: '1.0.0',
+    };
+    fs.writeFileSync(
+      path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME),
+      JSON.stringify(config),
+    );
+
+    const extensions = loadExtensions(tempWorkspaceDir);
+    expect(extensions).toHaveLength(1);
+    expect(extensions[0].path).toBe(extensionDir);
+    expect(extensions[0].config.name).toBe('test-extension');
+  });
+
+  it('should include extension path in loaded extension', () => {
+    const workspaceExtensionsDir = path.join(
+      tempWorkspaceDir,
+      EXTENSIONS_DIRECTORY_NAME,
+    );
+    fs.mkdirSync(workspaceExtensionsDir, { recursive: true });
+
+    const extensionDir = path.join(workspaceExtensionsDir, 'test-extension');
+    fs.mkdirSync(extensionDir, { recursive: true });
+
+    const config = {
+      name: 'test-extension',
+      version: '1.0.0',
+    };
+    fs.writeFileSync(
+      path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME),
+      JSON.stringify(config),
+    );
+
+    const extensions = loadExtensions(tempWorkspaceDir);
+    expect(extensions).toHaveLength(1);
+    expect(extensions[0].path).toBe(extensionDir);
+    expect(extensions[0].config.name).toBe('test-extension');
+  });
+
   it('should load context file path when QWEN.md is present', () => {
     const workspaceExtensionsDir = path.join(
       tempWorkspaceDir,
diff --git a/packages/cli/src/config/extension.ts b/packages/cli/src/config/extension.ts
index 1c77ff04..cdd9bdd7 100644
--- a/packages/cli/src/config/extension.ts
+++ b/packages/cli/src/config/extension.ts
@@ -13,6 +13,7 @@ export const EXTENSIONS_DIRECTORY_NAME = path.join('.qwen', 'extensions');
 export const EXTENSIONS_CONFIG_FILENAME = 'gemini-extension.json';

 export interface Extension {
+  path: string;
   config: ExtensionConfig;
   contextFiles: string[];
 }
@@ -90,6 +91,7 @@ function loadExtension(extensionDir: string): Extension | null {
     .filter((contextFilePath) => fs.existsSync(contextFilePath));

   return {
+    path: extensionDir,
     config,
     contextFiles,
   };
@@ -121,6 +123,7 @@ export function annotateActiveExtensions(
       name: extension.config.name,
       version: extension.config.version,
       isActive: true,
+      path: extension.path,
     }));
   }
@@ -136,6 +139,7 @@ export function annotateActiveExtensions(
       name: extension.config.name,
       version: extension.config.version,
       isActive: false,
+      path: extension.path,
     }));
   }
@@ -153,6 +157,7 @@ export function annotateActiveExtensions(
       name: extension.config.name,
       version: extension.config.version,
       isActive,
+      path: extension.path,
     });
   }
diff --git a/packages/cli/src/config/settings.test.ts b/packages/cli/src/config/settings.test.ts
index ae655fe1..4099e778 100644
--- a/packages/cli/src/config/settings.test.ts
+++ b/packages/cli/src/config/settings.test.ts
@@ -59,7 +59,21 @@ const MOCK_WORKSPACE_SETTINGS_PATH = pathActual.join(
   'settings.json',
 );

-vi.mock('fs');
+vi.mock('fs', async (importOriginal) => {
+  // Get all the functions from the real 'fs' module
+  const actualFs = await importOriginal();
+
+  return {
+    ...actualFs, // Keep all the real functions
+    // Now, just override the ones we need for the test
+    existsSync: vi.fn(),
+    readFileSync: vi.fn(),
+    writeFileSync: vi.fn(),
+    mkdirSync: vi.fn(),
+    realpathSync: (p: string) => p,
+  };
+});
+
 vi.mock('strip-json-comments', () => ({
   default: vi.fn((content) => content),
 }));
@@ -320,6 +334,86 @@ describe('Settings Loading and Merging', () => {
     expect(settings.merged.contextFileName).toBe('PROJECT_SPECIFIC.md');
   });

+  it('should handle excludedProjectEnvVars correctly when only in user settings', () => {
+    (mockFsExistsSync as Mock).mockImplementation(
+      (p: fs.PathLike) => p === USER_SETTINGS_PATH,
+    );
+    const userSettingsContent = {
+      excludedProjectEnvVars: ['DEBUG', 'NODE_ENV', 'CUSTOM_VAR'],
+    };
+    (fs.readFileSync as Mock).mockImplementation(
+      (p: fs.PathOrFileDescriptor) => {
+        if (p === USER_SETTINGS_PATH)
+          return JSON.stringify(userSettingsContent);
+        return '';
+      },
+    );
+
+    const settings = loadSettings(MOCK_WORKSPACE_DIR);
+    expect(settings.merged.excludedProjectEnvVars).toEqual([
+      'DEBUG',
+      'NODE_ENV',
+      'CUSTOM_VAR',
+    ]);
+  });
+
+  it('should handle excludedProjectEnvVars correctly when only in workspace settings', () => {
+    (mockFsExistsSync as Mock).mockImplementation(
+      (p: fs.PathLike) => p === MOCK_WORKSPACE_SETTINGS_PATH,
+    );
+    const workspaceSettingsContent = {
+      excludedProjectEnvVars: ['WORKSPACE_DEBUG', 'WORKSPACE_VAR'],
+    };
+    (fs.readFileSync as Mock).mockImplementation(
+      (p: fs.PathOrFileDescriptor) => {
+        if (p === MOCK_WORKSPACE_SETTINGS_PATH)
+          return JSON.stringify(workspaceSettingsContent);
+        return '';
+      },
+    );
+
+    const settings = loadSettings(MOCK_WORKSPACE_DIR);
+    expect(settings.merged.excludedProjectEnvVars).toEqual([
+      'WORKSPACE_DEBUG',
+      'WORKSPACE_VAR',
+    ]);
+  });
+
+  it('should merge excludedProjectEnvVars with workspace taking precedence over user', () => {
+    (mockFsExistsSync as Mock).mockReturnValue(true);
+    const userSettingsContent = {
+      excludedProjectEnvVars: ['DEBUG', 'NODE_ENV', 'USER_VAR'],
+    };
+    const workspaceSettingsContent = {
+      excludedProjectEnvVars: ['WORKSPACE_DEBUG', 'WORKSPACE_VAR'],
+    };
+
+    (fs.readFileSync as Mock).mockImplementation(
+      (p: fs.PathOrFileDescriptor) => {
+        if (p === USER_SETTINGS_PATH)
+          return JSON.stringify(userSettingsContent);
+        if (p === MOCK_WORKSPACE_SETTINGS_PATH)
+          return JSON.stringify(workspaceSettingsContent);
+        return '';
+      },
+    );
+
+    const settings = loadSettings(MOCK_WORKSPACE_DIR);
+    expect(settings.user.settings.excludedProjectEnvVars).toEqual([
+      'DEBUG',
+      'NODE_ENV',
+      'USER_VAR',
+    ]);
+    expect(settings.workspace.settings.excludedProjectEnvVars).toEqual([
+      'WORKSPACE_DEBUG',
+      'WORKSPACE_VAR',
+    ]);
+    expect(settings.merged.excludedProjectEnvVars).toEqual([
+      'WORKSPACE_DEBUG',
+      'WORKSPACE_VAR',
+    ]);
+  });
+
   it('should default contextFileName to undefined if not in any settings file', () => {
     (mockFsExistsSync as Mock).mockReturnValue(true);
     const userSettingsContent = { theme: 'dark' };
@@ -777,6 +871,48 @@ describe('Settings Loading and Merging', () => {
     }
   });

+  it('should correctly merge dnsResolutionOrder with workspace taking precedence', () => {
+    (mockFsExistsSync as Mock).mockReturnValue(true);
+    const userSettingsContent = {
+      dnsResolutionOrder: 'ipv4first',
+    };
+    const workspaceSettingsContent = {
+      dnsResolutionOrder: 'verbatim',
+    };
+
+    (fs.readFileSync as Mock).mockImplementation(
+      (p: fs.PathOrFileDescriptor) => {
+        if (p === USER_SETTINGS_PATH)
+          return JSON.stringify(userSettingsContent);
+        if (p === MOCK_WORKSPACE_SETTINGS_PATH)
+          return JSON.stringify(workspaceSettingsContent);
+        return '{}';
+      },
+    );
+
+    const settings = loadSettings(MOCK_WORKSPACE_DIR);
+    expect(settings.merged.dnsResolutionOrder).toBe('verbatim');
+  });
+
+  it('should use user dnsResolutionOrder if workspace is not defined', () => {
+    (mockFsExistsSync as Mock).mockImplementation(
+      (p: fs.PathLike) => p === USER_SETTINGS_PATH,
+    );
+    const userSettingsContent = {
+      dnsResolutionOrder: 'verbatim',
+    };
+    (fs.readFileSync as Mock).mockImplementation(
+      (p: fs.PathOrFileDescriptor) => {
+        if (p === USER_SETTINGS_PATH)
+          return JSON.stringify(userSettingsContent);
+        return '{}';
+      },
+    );
+
+    const settings = loadSettings(MOCK_WORKSPACE_DIR);
+    expect(settings.merged.dnsResolutionOrder).toBe('verbatim');
+  });
+
   it('should leave unresolved environment variables as is', () => {
     const userSettingsContent = { apiKey: '$UNDEFINED_VAR' };
     (mockFsExistsSync as Mock).mockImplementation(
@@ -999,4 +1135,140 @@ describe('Settings Loading and Merging', () => {
       expect(loadedSettings.merged.theme).toBe('ocean');
     });
   });
+
+  describe('excludedProjectEnvVars integration', () => {
+    const originalEnv = { ...process.env };
+
+    beforeEach(() => {
+      process.env = { ...originalEnv };
+    });
+
+    afterEach(() => {
+      process.env = originalEnv;
+    });
+
+    it('should exclude DEBUG and DEBUG_MODE from project .env files by default', () => {
+      // Create a workspace settings file with excludedProjectEnvVars
+      const workspaceSettingsContent = {
+        excludedProjectEnvVars: ['DEBUG', 'DEBUG_MODE'],
+      };
+
+      (mockFsExistsSync as Mock).mockImplementation(
+        (p: fs.PathLike) => p === MOCK_WORKSPACE_SETTINGS_PATH,
+      );
+
+      (fs.readFileSync as Mock).mockImplementation(
+        (p: fs.PathOrFileDescriptor) => {
+          if (p === MOCK_WORKSPACE_SETTINGS_PATH)
+            return JSON.stringify(workspaceSettingsContent);
+          return '{}';
+        },
+      );
+
+      // Mock findEnvFile to return a project .env file
+      const originalFindEnvFile = (
+        loadSettings as unknown as { findEnvFile: () => string }
+      ).findEnvFile;
+      (loadSettings as unknown as { findEnvFile: () => string }).findEnvFile =
+        () => '/mock/project/.env';
+
+      // Mock fs.readFileSync for .env file content
+      const originalReadFileSync = fs.readFileSync;
+      (fs.readFileSync as Mock).mockImplementation(
+        (p: fs.PathOrFileDescriptor) => {
+          if (p === '/mock/project/.env') {
+            return 'DEBUG=true\nDEBUG_MODE=1\nGEMINI_API_KEY=test-key';
+          }
+          if (p === MOCK_WORKSPACE_SETTINGS_PATH) {
+            return JSON.stringify(workspaceSettingsContent);
+          }
+          return '{}';
+        },
+      );
+
+      try {
+        // This will call loadEnvironment internally with the merged settings
+        const settings = loadSettings(MOCK_WORKSPACE_DIR);
+
+        // Verify the settings were loaded correctly
+        expect(settings.merged.excludedProjectEnvVars).toEqual([
+          'DEBUG',
+          'DEBUG_MODE',
+        ]);
+
+        // Note: We can't directly test process.env changes here because the mocking
+        // prevents the actual file system operations, but we can verify the settings
+        // are correctly merged and passed to loadEnvironment
+      } finally {
+        (loadSettings as unknown as { findEnvFile: () => string }).findEnvFile =
+          originalFindEnvFile;
+        (fs.readFileSync as Mock).mockImplementation(originalReadFileSync);
+      }
+    });
+
+    it('should respect custom excludedProjectEnvVars from user settings', () => {
+      const userSettingsContent = {
+        excludedProjectEnvVars: ['NODE_ENV', 'DEBUG'],
+      };
+
+      (mockFsExistsSync as Mock).mockImplementation(
+        (p: fs.PathLike) => p === USER_SETTINGS_PATH,
+      );
+
+      (fs.readFileSync as Mock).mockImplementation(
+        (p: fs.PathOrFileDescriptor) => {
+          if (p === USER_SETTINGS_PATH)
+            return JSON.stringify(userSettingsContent);
+          return '{}';
+        },
+      );
+
+      const settings = loadSettings(MOCK_WORKSPACE_DIR);
+      expect(settings.user.settings.excludedProjectEnvVars).toEqual([
+        'NODE_ENV',
+        'DEBUG',
+      ]);
+      expect(settings.merged.excludedProjectEnvVars).toEqual([
+        'NODE_ENV',
+        'DEBUG',
+      ]);
+    });
+
+    it('should merge excludedProjectEnvVars with workspace taking precedence', () => {
+      const userSettingsContent = {
+        excludedProjectEnvVars: ['DEBUG', 'NODE_ENV', 'USER_VAR'],
+      };
+      const workspaceSettingsContent = {
+        excludedProjectEnvVars: ['WORKSPACE_DEBUG', 'WORKSPACE_VAR'],
+      };
+
+      (mockFsExistsSync as Mock).mockReturnValue(true);
+
+      (fs.readFileSync as Mock).mockImplementation(
+        (p: fs.PathOrFileDescriptor) => {
+          if (p === USER_SETTINGS_PATH)
+            return JSON.stringify(userSettingsContent);
+          if (p === MOCK_WORKSPACE_SETTINGS_PATH)
+            return JSON.stringify(workspaceSettingsContent);
+          return '{}';
+        },
+      );
+
+      const settings = loadSettings(MOCK_WORKSPACE_DIR);
+
+      expect(settings.user.settings.excludedProjectEnvVars).toEqual([
+        'DEBUG',
+        'NODE_ENV',
+        'USER_VAR',
+      ]);
+      expect(settings.workspace.settings.excludedProjectEnvVars).toEqual([
+        'WORKSPACE_DEBUG',
+        'WORKSPACE_VAR',
+      ]);
+      expect(settings.merged.excludedProjectEnvVars).toEqual([
+        'WORKSPACE_DEBUG',
+        'WORKSPACE_VAR',
+      ]);
+    });
+  });
 });
diff --git a/packages/cli/src/config/settings.ts b/packages/cli/src/config/settings.ts
index 31de5004..684637a7 100644
--- a/packages/cli/src/config/settings.ts
+++ b/packages/cli/src/config/settings.ts
@@ -24,6 +24,7 @@ import { CustomTheme } from '../ui/themes/theme.js';
 export const SETTINGS_DIRECTORY_NAME = '.qwen';
 export const USER_SETTINGS_DIR = path.join(homedir(), SETTINGS_DIRECTORY_NAME);
 export const USER_SETTINGS_PATH = path.join(USER_SETTINGS_DIR, 'settings.json');
+export const DEFAULT_EXCLUDED_ENV_VARS = ['DEBUG', 'DEBUG_MODE'];

 export function getSystemSettingsPath(): string {
   if (process.env.GEMINI_CLI_SYSTEM_SETTINGS_PATH) {
@@ -38,6 +39,12 @@ export function getSystemSettingsPath(): string {
   }
 }

+export function
getWorkspaceSettingsPath(workspaceDir: string): string { + return path.join(workspaceDir, SETTINGS_DIRECTORY_NAME, 'settings.json'); +} + +export type DnsResolutionOrder = 'ipv4first' | 'verbatim'; + export enum SettingScope { User = 'User', Workspace = 'Workspace', @@ -60,6 +67,7 @@ export interface Settings { theme?: string; customThemes?: Record<string, CustomTheme>; selectedAuthType?: AuthType; + useExternalAuth?: boolean; sandbox?: boolean | string; coreTools?: string[]; excludeTools?: string[]; @@ -78,6 +86,8 @@ export interface Settings { bugCommand?: BugCommandSettings; checkpointing?: CheckpointingSettings; autoConfigureMaxOldSpaceSize?: boolean; + /** The model name to use (e.g. 'gemini-9.0-pro') */ + model?: string; enableOpenAILogging?: boolean; // Git-aware file filtering settings @@ -105,10 +115,23 @@ export interface Settings { summarizeToolOutput?: Record<string, SummarizeToolOutputSettings>; vimMode?: boolean; + memoryImportFormat?: 'tree' | 'flat'; - // Add other settings here. + // Flag to be removed post-launch. + ideModeFeature?: boolean; + // IDE mode setting configured via slash command toggle. ideMode?: boolean; + + // Setting for disabling auto-update. + disableAutoUpdate?: boolean; + + // Setting for disabling the update nag message. 
+ disableUpdateNag?: boolean; + memoryDiscoveryMaxDirs?: number; + // Environment variables to exclude from project .env files + excludedProjectEnvVars?: string[]; + dnsResolutionOrder?: DnsResolutionOrder; sampling_params?: Record<string, unknown>; systemPromptMappings?: Array<{ baseUrls: string[]; @@ -295,15 +318,61 @@ export function setUpCloudShellEnvironment(envFilePath: string | null): void { } } -export function loadEnvironment(): void { +export function loadEnvironment(settings?: Settings): void { const envFilePath = findEnvFile(process.cwd()); + // Cloud Shell environment variable handling if (process.env.CLOUD_SHELL === 'true') { setUpCloudShellEnvironment(envFilePath); } + // If no settings provided, try to load workspace settings for exclusions + let resolvedSettings = settings; + if (!resolvedSettings) { + const workspaceSettingsPath = getWorkspaceSettingsPath(process.cwd()); + try { + if (fs.existsSync(workspaceSettingsPath)) { + const workspaceContent = fs.readFileSync( + workspaceSettingsPath, + 'utf-8', + ); + const parsedWorkspaceSettings = JSON.parse( + stripJsonComments(workspaceContent), + ) as Settings; + resolvedSettings = resolveEnvVarsInObject(parsedWorkspaceSettings); + } + } catch (_e) { + // Ignore errors loading workspace settings + } + } + if (envFilePath) { - dotenv.config({ path: envFilePath, quiet: true }); + // Manually parse and load environment variables to handle exclusions correctly. + // This avoids modifying environment variables that were already set from the shell. + try { + const envFileContent = fs.readFileSync(envFilePath, 'utf-8'); + const parsedEnv = dotenv.parse(envFileContent); + + const excludedVars = + resolvedSettings?.excludedProjectEnvVars || DEFAULT_EXCLUDED_ENV_VARS; + const isProjectEnvFile = !envFilePath.includes(GEMINI_DIR); + + for (const key in parsedEnv) { + if (Object.hasOwn(parsedEnv, key)) { + // If it's a project .env file, skip loading excluded variables. 
+ if (isProjectEnvFile && excludedVars.includes(key)) { + continue; + } + + // Load variable only if it's not already set in the environment. + if (!Object.hasOwn(process.env, key)) { + process.env[key] = parsedEnv[key]; + } + } + } + } catch (_e) { + // Errors are ignored to match the behavior of `dotenv.config({ quiet: true })`. + } } } @@ -312,12 +381,29 @@ export function loadEnvironment(): void { * Project settings override user settings. */ export function loadSettings(workspaceDir: string): LoadedSettings { - loadEnvironment(); let systemSettings: Settings = {}; let userSettings: Settings = {}; let workspaceSettings: Settings = {}; const settingsErrors: SettingsError[] = []; const systemSettingsPath = getSystemSettingsPath(); + + // FIX: Resolve paths to their canonical representation to handle symlinks + const resolvedWorkspaceDir = path.resolve(workspaceDir); + const resolvedHomeDir = path.resolve(homedir()); + + let realWorkspaceDir = resolvedWorkspaceDir; + try { + // fs.realpathSync gets the "true" path, resolving any symlinks + realWorkspaceDir = fs.realpathSync(resolvedWorkspaceDir); + } catch (_e) { + // This is okay. The path might not exist yet, and that's a valid state. + } + + // We expect homedir to always exist and be resolvable. 
+ const realHomeDir = fs.realpathSync(resolvedHomeDir); + + const workspaceSettingsPath = getWorkspaceSettingsPath(workspaceDir); + // Load system settings try { if (fs.existsSync(systemSettingsPath)) { @@ -356,37 +442,35 @@ export function loadSettings(workspaceDir: string): LoadedSettings { }); } - const workspaceSettingsPath = path.join( - workspaceDir, - SETTINGS_DIRECTORY_NAME, - 'settings.json', - ); - - // Load workspace settings - try { - if (fs.existsSync(workspaceSettingsPath)) { - const projectContent = fs.readFileSync(workspaceSettingsPath, 'utf-8'); - const parsedWorkspaceSettings = JSON.parse( - stripJsonComments(projectContent), - ) as Settings; - workspaceSettings = resolveEnvVarsInObject(parsedWorkspaceSettings); - if (workspaceSettings.theme && workspaceSettings.theme === 'VS') { - workspaceSettings.theme = DefaultLight.name; - } else if ( - workspaceSettings.theme && - workspaceSettings.theme === 'VS2015' - ) { - workspaceSettings.theme = DefaultDark.name; + // This comparison is now much more reliable. 
+ if (realWorkspaceDir !== realHomeDir) { + // Load workspace settings + try { + if (fs.existsSync(workspaceSettingsPath)) { + const projectContent = fs.readFileSync(workspaceSettingsPath, 'utf-8'); + const parsedWorkspaceSettings = JSON.parse( + stripJsonComments(projectContent), + ) as Settings; + workspaceSettings = resolveEnvVarsInObject(parsedWorkspaceSettings); + if (workspaceSettings.theme && workspaceSettings.theme === 'VS') { + workspaceSettings.theme = DefaultLight.name; + } else if ( + workspaceSettings.theme && + workspaceSettings.theme === 'VS2015' + ) { + workspaceSettings.theme = DefaultDark.name; + } } + } catch (error: unknown) { + settingsErrors.push({ + message: getErrorMessage(error), + path: workspaceSettingsPath, + }); } - } catch (error: unknown) { - settingsErrors.push({ - message: getErrorMessage(error), - path: workspaceSettingsPath, - }); } - return new LoadedSettings( + // Create LoadedSettings first + const loadedSettings = new LoadedSettings( { path: systemSettingsPath, settings: systemSettings, @@ -401,6 +485,11 @@ export function loadSettings(workspaceDir: string): LoadedSettings { }, settingsErrors, ); + + // Load environment with merged settings + loadEnvironment(loadedSettings.merged); + + return loadedSettings; } export function saveSettings(settingsFile: SettingsFile): void { diff --git a/packages/cli/src/gemini.test.tsx b/packages/cli/src/gemini.test.tsx index 505841c7..c8bb45ab 100644 --- a/packages/cli/src/gemini.test.tsx +++ b/packages/cli/src/gemini.test.tsx @@ -6,7 +6,11 @@ import stripAnsi from 'strip-ansi'; import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; -import { main, setupUnhandledRejectionHandler } from './gemini.js'; +import { + main, + setupUnhandledRejectionHandler, + validateDnsResolutionOrder, +} from './gemini.js'; import { LoadedSettings, SettingsFile, @@ -211,3 +215,38 @@ describe('gemini.tsx main function', () => { processExitSpy.mockRestore(); }); }); + 
+describe('validateDnsResolutionOrder', () => { + let consoleWarnSpy: ReturnType<typeof vi.spyOn>; + + beforeEach(() => { + consoleWarnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {}); + }); + + afterEach(() => { + consoleWarnSpy.mockRestore(); + }); + + it('should return "ipv4first" when the input is "ipv4first"', () => { + expect(validateDnsResolutionOrder('ipv4first')).toBe('ipv4first'); + expect(consoleWarnSpy).not.toHaveBeenCalled(); + }); + + it('should return "verbatim" when the input is "verbatim"', () => { + expect(validateDnsResolutionOrder('verbatim')).toBe('verbatim'); + expect(consoleWarnSpy).not.toHaveBeenCalled(); + }); + + it('should return the default "ipv4first" when the input is undefined', () => { + expect(validateDnsResolutionOrder(undefined)).toBe('ipv4first'); + expect(consoleWarnSpy).not.toHaveBeenCalled(); + }); + + it('should return the default "ipv4first" and log a warning for an invalid string', () => { + expect(validateDnsResolutionOrder('invalid-value')).toBe('ipv4first'); + expect(consoleWarnSpy).toHaveBeenCalledOnce(); + expect(consoleWarnSpy).toHaveBeenCalledWith( + 'Invalid value for dnsResolutionOrder in settings: "invalid-value". 
Using default "ipv4first".', + ); + }); +}); diff --git a/packages/cli/src/gemini.tsx b/packages/cli/src/gemini.tsx index 1b28ec12..e9e0420b 100644 --- a/packages/cli/src/gemini.tsx +++ b/packages/cli/src/gemini.tsx @@ -12,9 +12,11 @@ import { readStdin } from './utils/readStdin.js'; import { basename } from 'node:path'; import v8 from 'node:v8'; import os from 'node:os'; +import dns from 'node:dns'; import { spawn } from 'node:child_process'; import { start_sandbox } from './utils/sandbox.js'; import { + DnsResolutionOrder, LoadedSettings, loadSettings, SettingScope, @@ -40,8 +42,27 @@ import { import { validateAuthMethod } from './config/auth.js'; import { setMaxSizedBoxDebugging } from './ui/components/shared/MaxSizedBox.js'; import { validateNonInteractiveAuth } from './validateNonInterActiveAuth.js'; +import { checkForUpdates } from './ui/utils/updateCheck.js'; +import { handleAutoUpdate } from './utils/handleAutoUpdate.js'; import { appEvents, AppEvent } from './utils/events.js'; +export function validateDnsResolutionOrder( + order: string | undefined, +): DnsResolutionOrder { + const defaultValue: DnsResolutionOrder = 'ipv4first'; + if (order === undefined) { + return defaultValue; + } + if (order === 'ipv4first' || order === 'verbatim') { + return order; + } + // We don't want to throw here, just warn and use the default. + console.warn( + `Invalid value for dnsResolutionOrder in settings: "${order}". 
Using default "${defaultValue}".`, + ); + return defaultValue; +} + function getNodeMemoryArgs(config: Config): string[] { const totalMemoryMB = os.totalmem() / (1024 * 1024); const heapStats = v8.getHeapStatistics(); @@ -136,6 +157,10 @@ export async function main() { argv, ); + dns.setDefaultResultOrder( + validateDnsResolutionOrder(settings.merged.dnsResolutionOrder), + ); + if (argv.promptInteractive && !process.stdin.isTTY) { console.error( 'Error: The --prompt-interactive flag is not supported when piping input from stdin.', @@ -184,7 +209,10 @@ export async function main() { : []; const sandboxConfig = config.getSandbox(); if (sandboxConfig) { - if (settings.merged.selectedAuthType) { + if ( + settings.merged.selectedAuthType && + !settings.merged.useExternalAuth + ) { // Validate authentication here because the sandbox will interfere with the Oauth2 web redirect. try { const err = validateAuthMethod(settings.merged.selectedAuthType); @@ -197,7 +225,7 @@ export async function main() { process.exit(1); } } - await start_sandbox(sandboxConfig, memoryArgs); + await start_sandbox(sandboxConfig, memoryArgs, config); process.exit(0); } else { // Not in a sandbox and not entering one, so relaunch with additional @@ -246,6 +274,17 @@ export async function main() { { exitOnCtrlC: false }, ); + checkForUpdates() + .then((info) => { + handleAutoUpdate(info, settings, config.getProjectRoot()); + }) + .catch((err) => { + // Silently ignore update check errors. 
+ if (config.getDebugMode()) { + console.error('Update check failed:', err); + } + }); + registerCleanup(() => instance.unmount()); return; } @@ -331,6 +370,7 @@ async function loadNonInteractiveConfig( return await validateNonInteractiveAuth( settings.merged.selectedAuthType, + settings.merged.useExternalAuth, finalConfig, ); } diff --git a/packages/cli/src/nonInteractiveCli.test.ts b/packages/cli/src/nonInteractiveCli.test.ts index 6c37efb8..288b743a 100644 --- a/packages/cli/src/nonInteractiveCli.test.ts +++ b/packages/cli/src/nonInteractiveCli.test.ts @@ -4,196 +4,169 @@ * SPDX-License-Identifier: Apache-2.0 */ -/* eslint-disable @typescript-eslint/no-explicit-any */ -import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; +import { + Config, + executeToolCall, + ToolRegistry, + ToolErrorType, + shutdownTelemetry, + GeminiEventType, + ServerGeminiStreamEvent, +} from '@qwen-code/qwen-code-core'; +import { Part } from '@google/genai'; import { runNonInteractive } from './nonInteractiveCli.js'; -import { Config, GeminiClient, ToolRegistry } from '@qwen-code/qwen-code-core'; -import { GenerateContentResponse, Part, FunctionCall } from '@google/genai'; +import { vi } from 'vitest'; -// Mock dependencies -vi.mock('@qwen-code/qwen-code-core', async () => { - const actualCore = await vi.importActual< - typeof import('@qwen-code/qwen-code-core') - >('@qwen-code/qwen-code-core'); +// Mock core modules +vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => { + const original = + await importOriginal<typeof import('@qwen-code/qwen-code-core')>(); return { - ...actualCore, - GeminiClient: vi.fn(), - ToolRegistry: vi.fn(), + ...original, executeToolCall: vi.fn(), + shutdownTelemetry: vi.fn(), + isTelemetrySdkInitialized: vi.fn().mockReturnValue(true), }; }); describe('runNonInteractive', () => { let mockConfig: Config; - let mockGeminiClient: GeminiClient; let mockToolRegistry: ToolRegistry; - let mockChat: { - sendMessageStream: ReturnType<typeof vi.fn>; + let mockCoreExecuteToolCall: vi.Mock; + let 
mockShutdownTelemetry: vi.Mock; + let consoleErrorSpy: vi.SpyInstance; + let processExitSpy: vi.SpyInstance; + let processStdoutSpy: vi.SpyInstance; + let mockGeminiClient: { + sendMessageStream: vi.Mock; }; - let mockProcessStdoutWrite: ReturnType<typeof vi.fn>; - let mockProcessExit: ReturnType<typeof vi.fn>; beforeEach(() => { - vi.resetAllMocks(); - mockChat = { - sendMessageStream: vi.fn(), - }; - mockGeminiClient = { - getChat: vi.fn().mockResolvedValue(mockChat), - } as unknown as GeminiClient; + mockCoreExecuteToolCall = vi.mocked(executeToolCall); + mockShutdownTelemetry = vi.mocked(shutdownTelemetry); + + consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {}); + processExitSpy = vi + .spyOn(process, 'exit') + .mockImplementation((() => {}) as (code?: number) => never); + processStdoutSpy = vi + .spyOn(process.stdout, 'write') + .mockImplementation(() => true); + mockToolRegistry = { - getFunctionDeclarations: vi.fn().mockReturnValue([]), getTool: vi.fn(), + getFunctionDeclarations: vi.fn().mockReturnValue([]), } as unknown as ToolRegistry; - vi.mocked(GeminiClient).mockImplementation(() => mockGeminiClient); - vi.mocked(ToolRegistry).mockImplementation(() => mockToolRegistry); + mockGeminiClient = { + sendMessageStream: vi.fn(), + }; mockConfig = { - getToolRegistry: vi.fn().mockReturnValue(mockToolRegistry), + initialize: vi.fn().mockResolvedValue(undefined), getGeminiClient: vi.fn().mockReturnValue(mockGeminiClient), - getContentGeneratorConfig: vi.fn().mockReturnValue({}), + getToolRegistry: vi.fn().mockResolvedValue(mockToolRegistry), getMaxSessionTurns: vi.fn().mockReturnValue(10), - initialize: vi.fn(), + getIdeMode: vi.fn().mockReturnValue(false), + getFullContext: vi.fn().mockReturnValue(false), + getContentGeneratorConfig: vi.fn().mockReturnValue({}), } as unknown as Config; - - mockProcessStdoutWrite = vi.fn().mockImplementation(() => true); - process.stdout.write = mockProcessStdoutWrite as any; // Use any to bypass strict signature matching for mock - 
mockProcessExit = vi - .fn() - .mockImplementation((_code?: number) => undefined as never); - process.exit = mockProcessExit as any; // Use any for process.exit mock }); afterEach(() => { vi.restoreAllMocks(); - // Restore original process methods if they were globally patched - // This might require storing the original methods before patching them in beforeEach }); + async function* createStreamFromEvents( + events: ServerGeminiStreamEvent[], + ): AsyncGenerator { + for (const event of events) { + yield event; + } + } + it('should process input and write text output', async () => { - const inputStream = (async function* () { - yield { - candidates: [{ content: { parts: [{ text: 'Hello' }] } }], - } as GenerateContentResponse; - yield { - candidates: [{ content: { parts: [{ text: ' World' }] } }], - } as GenerateContentResponse; - })(); - mockChat.sendMessageStream.mockResolvedValue(inputStream); + const events: ServerGeminiStreamEvent[] = [ + { type: GeminiEventType.Content, value: 'Hello' }, + { type: GeminiEventType.Content, value: ' World' }, + ]; + mockGeminiClient.sendMessageStream.mockReturnValue( + createStreamFromEvents(events), + ); await runNonInteractive(mockConfig, 'Test input', 'prompt-id-1'); - expect(mockChat.sendMessageStream).toHaveBeenCalledWith( - { - message: [{ text: 'Test input' }], - config: { - abortSignal: expect.any(AbortSignal), - tools: [{ functionDeclarations: [] }], - }, - }, - expect.any(String), + expect(mockGeminiClient.sendMessageStream).toHaveBeenCalledWith( + [{ text: 'Test input' }], + expect.any(AbortSignal), + 'prompt-id-1', ); - expect(mockProcessStdoutWrite).toHaveBeenCalledWith('Hello'); - expect(mockProcessStdoutWrite).toHaveBeenCalledWith(' World'); - expect(mockProcessStdoutWrite).toHaveBeenCalledWith('\n'); + expect(processStdoutSpy).toHaveBeenCalledWith('Hello'); + expect(processStdoutSpy).toHaveBeenCalledWith(' World'); + expect(processStdoutSpy).toHaveBeenCalledWith('\n'); + 
expect(mockShutdownTelemetry).toHaveBeenCalled(); }); it('should handle a single tool call and respond', async () => { - const functionCall: FunctionCall = { - id: 'fc1', - name: 'testTool', - args: { p: 'v' }, - }; - const toolResponsePart: Part = { - functionResponse: { + const toolCallEvent: ServerGeminiStreamEvent = { + type: GeminiEventType.ToolCallRequest, + value: { + callId: 'tool-1', name: 'testTool', - id: 'fc1', - response: { result: 'tool success' }, + args: { arg1: 'value1' }, + isClientInitiated: false, + prompt_id: 'prompt-id-2', }, }; + const toolResponse: Part[] = [{ text: 'Tool response' }]; + mockCoreExecuteToolCall.mockResolvedValue({ responseParts: toolResponse }); - const { executeToolCall: mockCoreExecuteToolCall } = await import( - '@qwen-code/qwen-code-core' - ); - vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({ - callId: 'fc1', - responseParts: [toolResponsePart], - resultDisplay: 'Tool success display', - error: undefined, - }); + const firstCallEvents: ServerGeminiStreamEvent[] = [toolCallEvent]; + const secondCallEvents: ServerGeminiStreamEvent[] = [ + { type: GeminiEventType.Content, value: 'Final answer' }, + ]; - const stream1 = (async function* () { - yield { functionCalls: [functionCall] } as GenerateContentResponse; - })(); - const stream2 = (async function* () { - yield { - candidates: [{ content: { parts: [{ text: 'Final answer' }] } }], - } as GenerateContentResponse; - })(); - mockChat.sendMessageStream - .mockResolvedValueOnce(stream1) - .mockResolvedValueOnce(stream2); + mockGeminiClient.sendMessageStream + .mockReturnValueOnce(createStreamFromEvents(firstCallEvents)) + .mockReturnValueOnce(createStreamFromEvents(secondCallEvents)); await runNonInteractive(mockConfig, 'Use a tool', 'prompt-id-2'); - expect(mockChat.sendMessageStream).toHaveBeenCalledTimes(2); + expect(mockGeminiClient.sendMessageStream).toHaveBeenCalledTimes(2); expect(mockCoreExecuteToolCall).toHaveBeenCalledWith( mockConfig, - 
expect.objectContaining({ callId: 'fc1', name: 'testTool' }), + expect.objectContaining({ name: 'testTool' }), mockToolRegistry, expect.any(AbortSignal), ); - expect(mockChat.sendMessageStream).toHaveBeenLastCalledWith( - expect.objectContaining({ - message: [toolResponsePart], - }), - expect.any(String), + expect(mockGeminiClient.sendMessageStream).toHaveBeenNthCalledWith( + 2, + [{ text: 'Tool response' }], + expect.any(AbortSignal), + 'prompt-id-2', ); - expect(mockProcessStdoutWrite).toHaveBeenCalledWith('Final answer'); + expect(processStdoutSpy).toHaveBeenCalledWith('Final answer'); + expect(processStdoutSpy).toHaveBeenCalledWith('\n'); }); it('should handle error during tool execution', async () => { - const functionCall: FunctionCall = { - id: 'fcError', - name: 'errorTool', - args: {}, - }; - const errorResponsePart: Part = { - functionResponse: { + const toolCallEvent: ServerGeminiStreamEvent = { + type: GeminiEventType.ToolCallRequest, + value: { + callId: 'tool-1', name: 'errorTool', - id: 'fcError', - response: { error: 'Tool failed' }, + args: {}, + isClientInitiated: false, + prompt_id: 'prompt-id-3', }, }; - - const { executeToolCall: mockCoreExecuteToolCall } = await import( - '@qwen-code/qwen-code-core' - ); - vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({ - callId: 'fcError', - responseParts: [errorResponsePart], - resultDisplay: 'Tool execution failed badly', - error: new Error('Tool failed'), + mockCoreExecuteToolCall.mockResolvedValue({ + error: new Error('Tool execution failed badly'), + errorType: ToolErrorType.UNHANDLED_EXCEPTION, }); - - const stream1 = (async function* () { - yield { functionCalls: [functionCall] } as GenerateContentResponse; - })(); - - const stream2 = (async function* () { - yield { - candidates: [ - { content: { parts: [{ text: 'Could not complete request.' 
}] } }, - ], - } as GenerateContentResponse; - })(); - mockChat.sendMessageStream - .mockResolvedValueOnce(stream1) - .mockResolvedValueOnce(stream2); - const consoleErrorSpy = vi - .spyOn(console, 'error') - .mockImplementation(() => {}); + mockGeminiClient.sendMessageStream.mockReturnValue( + createStreamFromEvents([toolCallEvent]), + ); await runNonInteractive(mockConfig, 'Trigger tool error', 'prompt-id-3'); @@ -201,75 +174,48 @@ describe('runNonInteractive', () => { expect(consoleErrorSpy).toHaveBeenCalledWith( 'Error executing tool errorTool: Tool execution failed badly', ); - expect(mockChat.sendMessageStream).toHaveBeenLastCalledWith( - expect.objectContaining({ - message: [errorResponsePart], - }), - expect.any(String), - ); - expect(mockProcessStdoutWrite).toHaveBeenCalledWith( - 'Could not complete request.', - ); + expect(processExitSpy).toHaveBeenCalledWith(1); }); it('should exit with error if sendMessageStream throws initially', async () => { const apiError = new Error('API connection failed'); - mockChat.sendMessageStream.mockRejectedValue(apiError); - const consoleErrorSpy = vi - .spyOn(console, 'error') - .mockImplementation(() => {}); + mockGeminiClient.sendMessageStream.mockImplementation(() => { + throw apiError; + }); await runNonInteractive(mockConfig, 'Initial fail', 'prompt-id-4'); expect(consoleErrorSpy).toHaveBeenCalledWith( '[API Error: API connection failed]', ); + expect(processExitSpy).toHaveBeenCalledWith(1); }); it('should not exit if a tool is not found, and should send error back to model', async () => { - const functionCall: FunctionCall = { - id: 'fcNotFound', - name: 'nonexistentTool', - args: {}, - }; - const errorResponsePart: Part = { - functionResponse: { + const toolCallEvent: ServerGeminiStreamEvent = { + type: GeminiEventType.ToolCallRequest, + value: { + callId: 'tool-1', name: 'nonexistentTool', - id: 'fcNotFound', - response: { error: 'Tool "nonexistentTool" not found in registry.' 
}, + args: {}, + isClientInitiated: false, + prompt_id: 'prompt-id-5', }, }; - - const { executeToolCall: mockCoreExecuteToolCall } = await import( - '@qwen-code/qwen-code-core' - ); - vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({ - callId: 'fcNotFound', - responseParts: [errorResponsePart], - resultDisplay: 'Tool "nonexistentTool" not found in registry.', + mockCoreExecuteToolCall.mockResolvedValue({ error: new Error('Tool "nonexistentTool" not found in registry.'), + resultDisplay: 'Tool "nonexistentTool" not found in registry.', }); + const finalResponse: ServerGeminiStreamEvent[] = [ + { + type: GeminiEventType.Content, + value: "Sorry, I can't find that tool.", + }, + ]; - const stream1 = (async function* () { - yield { functionCalls: [functionCall] } as GenerateContentResponse; - })(); - const stream2 = (async function* () { - yield { - candidates: [ - { - content: { - parts: [{ text: 'Unfortunately the tool does not exist.' }], - }, - }, - ], - } as GenerateContentResponse; - })(); - mockChat.sendMessageStream - .mockResolvedValueOnce(stream1) - .mockResolvedValueOnce(stream2); - const consoleErrorSpy = vi - .spyOn(console, 'error') - .mockImplementation(() => {}); + mockGeminiClient.sendMessageStream + .mockReturnValueOnce(createStreamFromEvents([toolCallEvent])) + .mockReturnValueOnce(createStreamFromEvents(finalResponse)); await runNonInteractive( mockConfig, @@ -277,68 +223,22 @@ describe('runNonInteractive', () => { 'prompt-id-5', ); + expect(mockCoreExecuteToolCall).toHaveBeenCalled(); expect(consoleErrorSpy).toHaveBeenCalledWith( 'Error executing tool nonexistentTool: Tool "nonexistentTool" not found in registry.', ); - - expect(mockProcessExit).not.toHaveBeenCalled(); - - expect(mockChat.sendMessageStream).toHaveBeenCalledTimes(2); - expect(mockChat.sendMessageStream).toHaveBeenLastCalledWith( - expect.objectContaining({ - message: [errorResponsePart], - }), - expect.any(String), - ); - - expect(mockProcessStdoutWrite).toHaveBeenCalledWith( 
- 'Unfortunately the tool does not exist.', + expect(processExitSpy).not.toHaveBeenCalled(); + expect(mockGeminiClient.sendMessageStream).toHaveBeenCalledTimes(2); + expect(processStdoutSpy).toHaveBeenCalledWith( + "Sorry, I can't find that tool.", ); }); it('should exit when max session turns are exceeded', async () => { - const functionCall: FunctionCall = { - id: 'fcLoop', - name: 'loopTool', - args: {}, - }; - const toolResponsePart: Part = { - functionResponse: { - name: 'loopTool', - id: 'fcLoop', - response: { result: 'still looping' }, - }, - }; - - // Config with a max turn of 1 - vi.mocked(mockConfig.getMaxSessionTurns).mockReturnValue(1); - - const { executeToolCall: mockCoreExecuteToolCall } = await import( - '@qwen-code/qwen-code-core' - ); - vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({ - callId: 'fcLoop', - responseParts: [toolResponsePart], - resultDisplay: 'Still looping', - error: undefined, - }); - - const stream = (async function* () { - yield { functionCalls: [functionCall] } as GenerateContentResponse; - })(); - - mockChat.sendMessageStream.mockResolvedValue(stream); - const consoleErrorSpy = vi - .spyOn(console, 'error') - .mockImplementation(() => {}); - - await runNonInteractive(mockConfig, 'Trigger loop'); - - expect(mockChat.sendMessageStream).toHaveBeenCalledTimes(1); + vi.mocked(mockConfig.getMaxSessionTurns).mockReturnValue(0); + await runNonInteractive(mockConfig, 'Trigger loop', 'prompt-id-6'); expect(consoleErrorSpy).toHaveBeenCalledWith( - ` - Reached max session turns for this session. Increase the number of turns by specifying maxSessionTurns in settings.json.`, + '\n Reached max session turns for this session. 
Increase the number of turns by specifying maxSessionTurns in settings.json.', ); - expect(mockProcessExit).not.toHaveBeenCalled(); }); }); diff --git a/packages/cli/src/nonInteractiveCli.ts b/packages/cli/src/nonInteractiveCli.ts index d3d646e9..8826835f 100644 --- a/packages/cli/src/nonInteractiveCli.ts +++ b/packages/cli/src/nonInteractiveCli.ts @@ -11,38 +11,13 @@ import { ToolRegistry, shutdownTelemetry, isTelemetrySdkInitialized, + GeminiEventType, + ToolErrorType, } from '@qwen-code/qwen-code-core'; -import { - Content, - Part, - FunctionCall, - GenerateContentResponse, -} from '@google/genai'; +import { Content, Part, FunctionCall } from '@google/genai'; import { parseAndFormatApiError } from './ui/utils/errorParsing.js'; -function getResponseText(response: GenerateContentResponse): string | null { - if (response.candidates && response.candidates.length > 0) { - const candidate = response.candidates[0]; - if ( - candidate.content && - candidate.content.parts && - candidate.content.parts.length > 0 - ) { - // We are running in headless mode so we don't need to return thoughts to STDOUT. 
- const thoughtPart = candidate.content.parts[0]; - if (thoughtPart?.thought) { - return null; - } - return candidate.content.parts - .filter((part) => part.text) - .map((part) => part.text) - .join(''); - } - } - return null; -} - export async function runNonInteractive( config: Config, input: string, @@ -60,7 +35,6 @@ export async function runNonInteractive( const geminiClient = config.getGeminiClient(); const toolRegistry: ToolRegistry = await config.getToolRegistry(); - const chat = await geminiClient.getChat(); const abortController = new AbortController(); let currentMessages: Content[] = [{ role: 'user', parts: [{ text: input }] }]; let turnCount = 0; @@ -68,7 +42,7 @@ export async function runNonInteractive( while (true) { turnCount++; if ( - config.getMaxSessionTurns() > 0 && + config.getMaxSessionTurns() >= 0 && turnCount > config.getMaxSessionTurns() ) { console.error( @@ -78,30 +52,28 @@ export async function runNonInteractive( } const functionCalls: FunctionCall[] = []; - const responseStream = await chat.sendMessageStream( - { - message: currentMessages[0]?.parts || [], // Ensure parts are always provided - config: { - abortSignal: abortController.signal, - tools: [ - { functionDeclarations: toolRegistry.getFunctionDeclarations() }, - ], - }, - }, + const responseStream = geminiClient.sendMessageStream( + currentMessages[0]?.parts || [], + abortController.signal, prompt_id, ); - for await (const resp of responseStream) { + for await (const event of responseStream) { if (abortController.signal.aborted) { console.error('Operation cancelled.'); return; } - const textPart = getResponseText(resp); - if (textPart) { - process.stdout.write(textPart); - } - if (resp.functionCalls) { - functionCalls.push(...resp.functionCalls); + + if (event.type === GeminiEventType.Content) { + process.stdout.write(event.value); + } else if (event.type === GeminiEventType.ToolCallRequest) { + const toolCallRequest = event.value; + const fc: FunctionCall = { + name: 
toolCallRequest.name, + args: toolCallRequest.args, + id: toolCallRequest.callId, + }; + functionCalls.push(fc); } } @@ -126,15 +98,11 @@ export async function runNonInteractive( ); if (toolResponse.error) { - const isToolNotFound = toolResponse.error.message.includes( - 'not found in registry', - ); console.error( `Error executing tool ${fc.name}: ${toolResponse.resultDisplay || toolResponse.error.message}`, ); - if (!isToolNotFound) { + if (toolResponse.errorType === ToolErrorType.UNHANDLED_EXCEPTION) process.exit(1); - } } if (toolResponse.responseParts) { diff --git a/packages/cli/src/services/BuiltinCommandLoader.ts b/packages/cli/src/services/BuiltinCommandLoader.ts index ebceba53..87d9af85 100644 --- a/packages/cli/src/services/BuiltinCommandLoader.ts +++ b/packages/cli/src/services/BuiltinCommandLoader.ts @@ -16,10 +16,12 @@ import { compressCommand } from '../ui/commands/compressCommand.js'; import { copyCommand } from '../ui/commands/copyCommand.js'; import { corgiCommand } from '../ui/commands/corgiCommand.js'; import { docsCommand } from '../ui/commands/docsCommand.js'; +import { directoryCommand } from '../ui/commands/directoryCommand.js'; import { editorCommand } from '../ui/commands/editorCommand.js'; import { extensionsCommand } from '../ui/commands/extensionsCommand.js'; import { helpCommand } from '../ui/commands/helpCommand.js'; import { ideCommand } from '../ui/commands/ideCommand.js'; +import { initCommand } from '../ui/commands/initCommand.js'; import { mcpCommand } from '../ui/commands/mcpCommand.js'; import { memoryCommand } from '../ui/commands/memoryCommand.js'; import { privacyCommand } from '../ui/commands/privacyCommand.js'; @@ -29,6 +31,8 @@ import { statsCommand } from '../ui/commands/statsCommand.js'; import { themeCommand } from '../ui/commands/themeCommand.js'; import { toolsCommand } from '../ui/commands/toolsCommand.js'; import { vimCommand } from '../ui/commands/vimCommand.js'; +import { setupGithubCommand } from 
'../ui/commands/setupGithubCommand.js'; +import { isGitHubRepository } from '../utils/gitUtils.js'; /** * Loads the core, hard-coded slash commands that are an integral part @@ -55,19 +59,22 @@ export class BuiltinCommandLoader implements ICommandLoader { copyCommand, corgiCommand, docsCommand, + directoryCommand, editorCommand, extensionsCommand, helpCommand, ideCommand(this.config), + initCommand, + mcpCommand, memoryCommand, privacyCommand, - mcpCommand, quitCommand, restoreCommand(this.config), statsCommand, themeCommand, toolsCommand, vimCommand, + ...(isGitHubRepository() ? [setupGithubCommand] : []), ]; return allDefinitions.filter((cmd): cmd is SlashCommand => cmd !== null); diff --git a/packages/cli/src/services/CommandService.test.ts b/packages/cli/src/services/CommandService.test.ts index 28731f81..e2d5b9f5 100644 --- a/packages/cli/src/services/CommandService.test.ts +++ b/packages/cli/src/services/CommandService.test.ts @@ -177,4 +177,176 @@ describe('CommandService', () => { expect(loader2.loadCommands).toHaveBeenCalledTimes(1); expect(loader2.loadCommands).toHaveBeenCalledWith(signal); }); + + it('should rename extension commands when they conflict', async () => { + const builtinCommand = createMockCommand('deploy', CommandKind.BUILT_IN); + const userCommand = createMockCommand('sync', CommandKind.FILE); + const extensionCommand1 = { + ...createMockCommand('deploy', CommandKind.FILE), + extensionName: 'firebase', + description: '[firebase] Deploy to Firebase', + }; + const extensionCommand2 = { + ...createMockCommand('sync', CommandKind.FILE), + extensionName: 'git-helper', + description: '[git-helper] Sync with remote', + }; + + const mockLoader1 = new MockCommandLoader([builtinCommand]); + const mockLoader2 = new MockCommandLoader([ + userCommand, + extensionCommand1, + extensionCommand2, + ]); + + const service = await CommandService.create( + [mockLoader1, mockLoader2], + new AbortController().signal, + ); + + const commands = 
service.getCommands(); + expect(commands).toHaveLength(4); + + // Built-in command keeps original name + const deployBuiltin = commands.find( + (cmd) => cmd.name === 'deploy' && !cmd.extensionName, + ); + expect(deployBuiltin).toBeDefined(); + expect(deployBuiltin?.kind).toBe(CommandKind.BUILT_IN); + + // Extension command conflicting with built-in gets renamed + const deployExtension = commands.find( + (cmd) => cmd.name === 'firebase.deploy', + ); + expect(deployExtension).toBeDefined(); + expect(deployExtension?.extensionName).toBe('firebase'); + + // User command keeps original name + const syncUser = commands.find( + (cmd) => cmd.name === 'sync' && !cmd.extensionName, + ); + expect(syncUser).toBeDefined(); + expect(syncUser?.kind).toBe(CommandKind.FILE); + + // Extension command conflicting with user command gets renamed + const syncExtension = commands.find( + (cmd) => cmd.name === 'git-helper.sync', + ); + expect(syncExtension).toBeDefined(); + expect(syncExtension?.extensionName).toBe('git-helper'); + }); + + it('should handle user/project command override correctly', async () => { + const builtinCommand = createMockCommand('help', CommandKind.BUILT_IN); + const userCommand = createMockCommand('help', CommandKind.FILE); + const projectCommand = createMockCommand('deploy', CommandKind.FILE); + const userDeployCommand = createMockCommand('deploy', CommandKind.FILE); + + const mockLoader1 = new MockCommandLoader([builtinCommand]); + const mockLoader2 = new MockCommandLoader([ + userCommand, + userDeployCommand, + projectCommand, + ]); + + const service = await CommandService.create( + [mockLoader1, mockLoader2], + new AbortController().signal, + ); + + const commands = service.getCommands(); + expect(commands).toHaveLength(2); + + // User command overrides built-in + const helpCommand = commands.find((cmd) => cmd.name === 'help'); + expect(helpCommand).toBeDefined(); + expect(helpCommand?.kind).toBe(CommandKind.FILE); + + // Project command overrides user 
command (last wins) + const deployCommand = commands.find((cmd) => cmd.name === 'deploy'); + expect(deployCommand).toBeDefined(); + expect(deployCommand?.kind).toBe(CommandKind.FILE); + }); + + it('should handle secondary conflicts when renaming extension commands', async () => { + // User has both /deploy and /gcp.deploy commands + const userCommand1 = createMockCommand('deploy', CommandKind.FILE); + const userCommand2 = createMockCommand('gcp.deploy', CommandKind.FILE); + + // Extension also has a deploy command that will conflict with user's /deploy + const extensionCommand = { + ...createMockCommand('deploy', CommandKind.FILE), + extensionName: 'gcp', + description: '[gcp] Deploy to Google Cloud', + }; + + const mockLoader = new MockCommandLoader([ + userCommand1, + userCommand2, + extensionCommand, + ]); + + const service = await CommandService.create( + [mockLoader], + new AbortController().signal, + ); + + const commands = service.getCommands(); + expect(commands).toHaveLength(3); + + // Original user command keeps its name + const deployUser = commands.find( + (cmd) => cmd.name === 'deploy' && !cmd.extensionName, + ); + expect(deployUser).toBeDefined(); + + // User's dot notation command keeps its name + const gcpDeployUser = commands.find( + (cmd) => cmd.name === 'gcp.deploy' && !cmd.extensionName, + ); + expect(gcpDeployUser).toBeDefined(); + + // Extension command gets renamed with suffix due to secondary conflict + const deployExtension = commands.find( + (cmd) => cmd.name === 'gcp.deploy1' && cmd.extensionName === 'gcp', + ); + expect(deployExtension).toBeDefined(); + expect(deployExtension?.description).toBe('[gcp] Deploy to Google Cloud'); + }); + + it('should handle multiple secondary conflicts with incrementing suffixes', async () => { + // User has /deploy, /gcp.deploy, and /gcp.deploy1 + const userCommand1 = createMockCommand('deploy', CommandKind.FILE); + const userCommand2 = createMockCommand('gcp.deploy', CommandKind.FILE); + const 
userCommand3 = createMockCommand('gcp.deploy1', CommandKind.FILE); + + // Extension has a deploy command + const extensionCommand = { + ...createMockCommand('deploy', CommandKind.FILE), + extensionName: 'gcp', + description: '[gcp] Deploy to Google Cloud', + }; + + const mockLoader = new MockCommandLoader([ + userCommand1, + userCommand2, + userCommand3, + extensionCommand, + ]); + + const service = await CommandService.create( + [mockLoader], + new AbortController().signal, + ); + + const commands = service.getCommands(); + expect(commands).toHaveLength(4); + + // Extension command gets renamed with suffix 2 due to multiple conflicts + const deployExtension = commands.find( + (cmd) => cmd.name === 'gcp.deploy2' && cmd.extensionName === 'gcp', + ); + expect(deployExtension).toBeDefined(); + expect(deployExtension?.description).toBe('[gcp] Deploy to Google Cloud'); + }); }); diff --git a/packages/cli/src/services/CommandService.ts b/packages/cli/src/services/CommandService.ts index ef4f4d14..78e4817b 100644 --- a/packages/cli/src/services/CommandService.ts +++ b/packages/cli/src/services/CommandService.ts @@ -30,13 +30,17 @@ export class CommandService { * * This factory method orchestrates the entire command loading process. It * runs all provided loaders in parallel, aggregates their results, handles - * name conflicts by letting the last-loaded command win, and then returns a + * name conflicts for extension commands by renaming them, and then returns a * fully constructed `CommandService` instance. * + * Conflict resolution: + * - Extension commands that conflict with existing commands are renamed to + * `extensionName.commandName` + * - Non-extension commands (built-in, user, project) override earlier commands + * with the same name based on loader order + * * @param loaders An array of objects that conform to the `ICommandLoader` - * interface. 
The order of loaders is significant: if multiple loaders - * provide a command with the same name, the command from the loader that - * appears later in the array will take precedence. + * interface. Built-in commands should come first, followed by FileCommandLoader. * @param signal An AbortSignal to cancel the loading process. * @returns A promise that resolves to a new, fully initialized `CommandService` instance. */ @@ -57,12 +61,28 @@ export class CommandService { } } - // De-duplicate commands using a Map. The last one found with a given name wins. - // This creates a natural override system based on the order of the loaders - // passed to the constructor. const commandMap = new Map(); for (const cmd of allCommands) { - commandMap.set(cmd.name, cmd); + let finalName = cmd.name; + + // Extension commands get renamed if they conflict with existing commands + if (cmd.extensionName && commandMap.has(cmd.name)) { + let renamedName = `${cmd.extensionName}.${cmd.name}`; + let suffix = 1; + + // Keep trying until we find a name that doesn't conflict + while (commandMap.has(renamedName)) { + renamedName = `${cmd.extensionName}.${cmd.name}${suffix}`; + suffix++; + } + + finalName = renamedName; + } + + commandMap.set(finalName, { + ...cmd, + name: finalName, + }); } const finalCommands = Object.freeze(Array.from(commandMap.values())); diff --git a/packages/cli/src/services/FileCommandLoader.test.ts b/packages/cli/src/services/FileCommandLoader.test.ts index fb565f41..48d6ab1b 100644 --- a/packages/cli/src/services/FileCommandLoader.test.ts +++ b/packages/cli/src/services/FileCommandLoader.test.ts @@ -4,13 +4,14 @@ * SPDX-License-Identifier: Apache-2.0 */ -import { FileCommandLoader } from './FileCommandLoader.js'; +import * as path from 'node:path'; import { Config, getProjectCommandsDir, getUserCommandsDir, } from '@qwen-code/qwen-code-core'; import mock from 'mock-fs'; +import { FileCommandLoader } from './FileCommandLoader.js'; import { assert, vi } from 'vitest'; 
import { createMockCommandContext } from '../test-utils/mockCommandContext.js'; import { @@ -85,7 +86,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(1); @@ -176,7 +177,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(2); @@ -194,9 +195,11 @@ describe('FileCommandLoader', () => { }, }, }); - const loader = new FileCommandLoader({ - getProjectRoot: () => '/path/to/project', - } as Config); + const mockConfig = { + getProjectRoot: vi.fn(() => '/path/to/project'), + getExtensions: vi.fn(() => []), + } as Config; + const loader = new FileCommandLoader(mockConfig); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(1); expect(commands[0]!.name).toBe('gcp:pipelines:run'); @@ -212,7 +215,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(1); @@ -221,7 +224,7 @@ describe('FileCommandLoader', () => { expect(command.name).toBe('git:commit'); }); - it('overrides user commands with project commands', async () => { + it('returns both user and project commands in order', async () => { const userCommandsDir = getUserCommandsDir(); const projectCommandsDir = getProjectCommandsDir(process.cwd()); mock({ @@ -233,16 +236,15 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader({ - getProjectRoot: () => process.cwd(), - } as Config); + const mockConfig = { + getProjectRoot: vi.fn(() => process.cwd()), + getExtensions: vi.fn(() => []), + } as 
Config; + const loader = new FileCommandLoader(mockConfig); const commands = await loader.loadCommands(signal); - expect(commands).toHaveLength(1); - const command = commands[0]; - expect(command).toBeDefined(); - - const result = await command.action?.( + expect(commands).toHaveLength(2); + const userResult = await commands[0].action?.( createMockCommandContext({ invocation: { raw: '/test', @@ -252,10 +254,25 @@ describe('FileCommandLoader', () => { }), '', ); - if (result?.type === 'submit_prompt') { - expect(result.content).toBe('Project prompt'); + if (userResult?.type === 'submit_prompt') { + expect(userResult.content).toBe('User prompt'); } else { - assert.fail('Incorrect action type'); + assert.fail('Incorrect action type for user command'); + } + const projectResult = await commands[1].action?.( + createMockCommandContext({ + invocation: { + raw: '/test', + name: 'test', + args: '', + }, + }), + '', + ); + if (projectResult?.type === 'submit_prompt') { + expect(projectResult.content).toBe('Project prompt'); + } else { + assert.fail('Incorrect action type for project command'); } }); @@ -268,7 +285,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(1); @@ -284,7 +301,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(1); @@ -299,7 +316,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); const command = commands[0]; expect(command).toBeDefined(); @@ -308,7 +325,7 @@ describe('FileCommandLoader', () => { it('handles 
file system errors gracefully', async () => { mock({}); // Mock an empty file system - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(0); }); @@ -321,7 +338,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); const command = commands[0]; expect(command).toBeDefined(); @@ -336,7 +353,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); const command = commands[0]; expect(command).toBeDefined(); @@ -351,7 +368,7 @@ describe('FileCommandLoader', () => { }, }); - const loader = new FileCommandLoader(null as unknown as Config); + const loader = new FileCommandLoader(null); const commands = await loader.loadCommands(signal); expect(commands).toHaveLength(1); @@ -362,6 +379,298 @@ describe('FileCommandLoader', () => { expect(command.name).toBe('legacy_command'); }); + describe('Extension Command Loading', () => { + it('loads commands from active extensions', async () => { + const userCommandsDir = getUserCommandsDir(); + const projectCommandsDir = getProjectCommandsDir(process.cwd()); + const extensionDir = path.join( + process.cwd(), + '.gemini/extensions/test-ext', + ); + + mock({ + [userCommandsDir]: { + 'user.toml': 'prompt = "User command"', + }, + [projectCommandsDir]: { + 'project.toml': 'prompt = "Project command"', + }, + [extensionDir]: { + 'gemini-extension.json': JSON.stringify({ + name: 'test-ext', + version: '1.0.0', + }), + commands: { + 'ext.toml': 'prompt = "Extension command"', + }, + }, + }); + + const mockConfig = { + getProjectRoot: vi.fn(() => process.cwd()), + getExtensions: vi.fn(() => 
[ + { + name: 'test-ext', + version: '1.0.0', + isActive: true, + path: extensionDir, + }, + ]), + } as Config; + const loader = new FileCommandLoader(mockConfig); + const commands = await loader.loadCommands(signal); + + expect(commands).toHaveLength(3); + const commandNames = commands.map((cmd) => cmd.name); + expect(commandNames).toEqual(['user', 'project', 'ext']); + + const extCommand = commands.find((cmd) => cmd.name === 'ext'); + expect(extCommand?.extensionName).toBe('test-ext'); + expect(extCommand?.description).toMatch(/^\[test-ext\]/); + }); + + it('extension commands have extensionName metadata for conflict resolution', async () => { + const userCommandsDir = getUserCommandsDir(); + const projectCommandsDir = getProjectCommandsDir(process.cwd()); + const extensionDir = path.join( + process.cwd(), + '.gemini/extensions/test-ext', + ); + + mock({ + [extensionDir]: { + 'gemini-extension.json': JSON.stringify({ + name: 'test-ext', + version: '1.0.0', + }), + commands: { + 'deploy.toml': 'prompt = "Extension deploy command"', + }, + }, + [userCommandsDir]: { + 'deploy.toml': 'prompt = "User deploy command"', + }, + [projectCommandsDir]: { + 'deploy.toml': 'prompt = "Project deploy command"', + }, + }); + + const mockConfig = { + getProjectRoot: vi.fn(() => process.cwd()), + getExtensions: vi.fn(() => [ + { + name: 'test-ext', + version: '1.0.0', + isActive: true, + path: extensionDir, + }, + ]), + } as Config; + const loader = new FileCommandLoader(mockConfig); + const commands = await loader.loadCommands(signal); + + // Return all commands, even duplicates + expect(commands).toHaveLength(3); + + expect(commands[0].name).toBe('deploy'); + expect(commands[0].extensionName).toBeUndefined(); + const result0 = await commands[0].action?.( + createMockCommandContext({ + invocation: { + raw: '/deploy', + name: 'deploy', + args: '', + }, + }), + '', + ); + expect(result0?.type).toBe('submit_prompt'); + if (result0?.type === 'submit_prompt') { + 
expect(result0.content).toBe('User deploy command'); + } + + expect(commands[1].name).toBe('deploy'); + expect(commands[1].extensionName).toBeUndefined(); + const result1 = await commands[1].action?.( + createMockCommandContext({ + invocation: { + raw: '/deploy', + name: 'deploy', + args: '', + }, + }), + '', + ); + expect(result1?.type).toBe('submit_prompt'); + if (result1?.type === 'submit_prompt') { + expect(result1.content).toBe('Project deploy command'); + } + + expect(commands[2].name).toBe('deploy'); + expect(commands[2].extensionName).toBe('test-ext'); + expect(commands[2].description).toMatch(/^\[test-ext\]/); + const result2 = await commands[2].action?.( + createMockCommandContext({ + invocation: { + raw: '/deploy', + name: 'deploy', + args: '', + }, + }), + '', + ); + expect(result2?.type).toBe('submit_prompt'); + if (result2?.type === 'submit_prompt') { + expect(result2.content).toBe('Extension deploy command'); + } + }); + + it('only loads commands from active extensions', async () => { + const extensionDir1 = path.join( + process.cwd(), + '.gemini/extensions/active-ext', + ); + const extensionDir2 = path.join( + process.cwd(), + '.gemini/extensions/inactive-ext', + ); + + mock({ + [extensionDir1]: { + 'gemini-extension.json': JSON.stringify({ + name: 'active-ext', + version: '1.0.0', + }), + commands: { + 'active.toml': 'prompt = "Active extension command"', + }, + }, + [extensionDir2]: { + 'gemini-extension.json': JSON.stringify({ + name: 'inactive-ext', + version: '1.0.0', + }), + commands: { + 'inactive.toml': 'prompt = "Inactive extension command"', + }, + }, + }); + + const mockConfig = { + getProjectRoot: vi.fn(() => process.cwd()), + getExtensions: vi.fn(() => [ + { + name: 'active-ext', + version: '1.0.0', + isActive: true, + path: extensionDir1, + }, + { + name: 'inactive-ext', + version: '1.0.0', + isActive: false, + path: extensionDir2, + }, + ]), + } as Config; + const loader = new FileCommandLoader(mockConfig); + const commands = await 
loader.loadCommands(signal); + + expect(commands).toHaveLength(1); + expect(commands[0].name).toBe('active'); + expect(commands[0].extensionName).toBe('active-ext'); + expect(commands[0].description).toMatch(/^\[active-ext\]/); + }); + + it('handles missing extension commands directory gracefully', async () => { + const extensionDir = path.join( + process.cwd(), + '.gemini/extensions/no-commands', + ); + + mock({ + [extensionDir]: { + 'gemini-extension.json': JSON.stringify({ + name: 'no-commands', + version: '1.0.0', + }), + // No commands directory + }, + }); + + const mockConfig = { + getProjectRoot: vi.fn(() => process.cwd()), + getExtensions: vi.fn(() => [ + { + name: 'no-commands', + version: '1.0.0', + isActive: true, + path: extensionDir, + }, + ]), + } as Config; + const loader = new FileCommandLoader(mockConfig); + const commands = await loader.loadCommands(signal); + expect(commands).toHaveLength(0); + }); + + it('handles nested command structure in extensions', async () => { + const extensionDir = path.join(process.cwd(), '.gemini/extensions/a'); + + mock({ + [extensionDir]: { + 'gemini-extension.json': JSON.stringify({ + name: 'a', + version: '1.0.0', + }), + commands: { + b: { + 'c.toml': 'prompt = "Nested command from extension a"', + d: { + 'e.toml': 'prompt = "Deeply nested command"', + }, + }, + 'simple.toml': 'prompt = "Simple command"', + }, + }, + }); + + const mockConfig = { + getProjectRoot: vi.fn(() => process.cwd()), + getExtensions: vi.fn(() => [ + { name: 'a', version: '1.0.0', isActive: true, path: extensionDir }, + ]), + } as Config; + const loader = new FileCommandLoader(mockConfig); + const commands = await loader.loadCommands(signal); + + expect(commands).toHaveLength(3); + + const commandNames = commands.map((cmd) => cmd.name).sort(); + expect(commandNames).toEqual(['b:c', 'b:d:e', 'simple']); + + const nestedCmd = commands.find((cmd) => cmd.name === 'b:c'); + expect(nestedCmd?.extensionName).toBe('a'); + 
expect(nestedCmd?.description).toMatch(/^\[a\]/); + expect(nestedCmd).toBeDefined(); + const result = await nestedCmd!.action?.( + createMockCommandContext({ + invocation: { + raw: '/b:c', + name: 'b:c', + args: '', + }, + }), + '', + ); + if (result?.type === 'submit_prompt') { + expect(result.content).toBe('Nested command from extension a'); + } else { + assert.fail('Incorrect action type'); + } + }); + }); + describe('Shorthand Argument Processor Integration', () => { it('correctly processes a command with {{args}}', async () => { const userCommandsDir = getUserCommandsDir(); diff --git a/packages/cli/src/services/FileCommandLoader.ts b/packages/cli/src/services/FileCommandLoader.ts index 5494ca55..7ea9bb0f 100644 --- a/packages/cli/src/services/FileCommandLoader.ts +++ b/packages/cli/src/services/FileCommandLoader.ts @@ -35,6 +35,11 @@ import { ShellProcessor, } from './prompt-processors/shellProcessor.js'; +interface CommandDirectory { + path: string; + extensionName?: string; +} + /** * Defines the Zod schema for a command definition file. This serves as the * single source of truth for both validation and type inference. @@ -65,13 +70,18 @@ export class FileCommandLoader implements ICommandLoader { } /** - * Loads all commands, applying the precedence rule where project-level - * commands override user-level commands with the same name. + * Loads all commands from user, project, and extension directories. + * Returns commands in order: user → project → extensions (alphabetically). + * + * Order is important for conflict resolution in CommandService: + * - User/project commands (without extensionName) use "last wins" strategy + * - Extension commands (with extensionName) get renamed if conflicts exist + * * @param signal An AbortSignal to cancel the loading process. - * @returns A promise that resolves to an array of loaded SlashCommands. + * @returns A promise that resolves to an array of all loaded SlashCommands. 
*/ async loadCommands(signal: AbortSignal): Promise { - const commandMap = new Map(); + const allCommands: SlashCommand[] = []; const globOptions = { nodir: true, dot: true, @@ -79,54 +89,85 @@ export class FileCommandLoader implements ICommandLoader { follow: true, }; - try { - // User Commands - const userDir = getUserCommandsDir(); - const userFiles = await glob('**/*.toml', { - ...globOptions, - cwd: userDir, - }); - const userCommandPromises = userFiles.map((file) => - this.parseAndAdaptFile(path.join(userDir, file), userDir), - ); - const userCommands = (await Promise.all(userCommandPromises)).filter( - (cmd): cmd is SlashCommand => cmd !== null, - ); - for (const cmd of userCommands) { - commandMap.set(cmd.name, cmd); - } + // Load commands from each directory + const commandDirs = this.getCommandDirectories(); + for (const dirInfo of commandDirs) { + try { + const files = await glob('**/*.toml', { + ...globOptions, + cwd: dirInfo.path, + }); - // Project Commands (these intentionally override user commands) - const projectDir = getProjectCommandsDir(this.projectRoot); - const projectFiles = await glob('**/*.toml', { - ...globOptions, - cwd: projectDir, - }); - const projectCommandPromises = projectFiles.map((file) => - this.parseAndAdaptFile(path.join(projectDir, file), projectDir), - ); - const projectCommands = ( - await Promise.all(projectCommandPromises) - ).filter((cmd): cmd is SlashCommand => cmd !== null); - for (const cmd of projectCommands) { - commandMap.set(cmd.name, cmd); + const commandPromises = files.map((file) => + this.parseAndAdaptFile( + path.join(dirInfo.path, file), + dirInfo.path, + dirInfo.extensionName, + ), + ); + + const commands = (await Promise.all(commandPromises)).filter( + (cmd): cmd is SlashCommand => cmd !== null, + ); + + // Add all commands without deduplication + allCommands.push(...commands); + } catch (error) { + if ((error as NodeJS.ErrnoException).code !== 'ENOENT') { + console.error( + `[FileCommandLoader] Error 
loading commands from ${dirInfo.path}:`, + error, + ); + } } - } catch (error) { - console.error(`[FileCommandLoader] Error during file search:`, error); } - return Array.from(commandMap.values()); + return allCommands; + } + + /** + * Get all command directories in order for loading. + * User commands → Project commands → Extension commands + * This order ensures extension commands can detect all conflicts. + */ + private getCommandDirectories(): CommandDirectory[] { + const dirs: CommandDirectory[] = []; + + // 1. User commands + dirs.push({ path: getUserCommandsDir() }); + + // 2. Project commands (override user commands) + dirs.push({ path: getProjectCommandsDir(this.projectRoot) }); + + // 3. Extension commands (processed last to detect all conflicts) + if (this.config) { + const activeExtensions = this.config + .getExtensions() + .filter((ext) => ext.isActive) + .sort((a, b) => a.name.localeCompare(b.name)); // Sort alphabetically for deterministic loading + + const extensionCommandDirs = activeExtensions.map((ext) => ({ + path: path.join(ext.path, 'commands'), + extensionName: ext.name, + })); + + dirs.push(...extensionCommandDirs); + } + + return dirs; } /** * Parses a single .toml file and transforms it into a SlashCommand object. * @param filePath The absolute path to the .toml file. * @param baseDir The root command directory for name calculation. + * @param extensionName Optional extension name to prefix commands with. * @returns A promise resolving to a SlashCommand, or null if the file is invalid. */ private async parseAndAdaptFile( filePath: string, baseDir: string, + extensionName?: string, ): Promise { let fileContent: string; try { @@ -167,7 +208,7 @@ export class FileCommandLoader implements ICommandLoader { 0, relativePathWithExt.length - 5, // length of '.toml' ); - const commandName = relativePath + const baseCommandName = relativePath .split(path.sep) // Sanitize each path segment to prevent ambiguity. 
Since ':' is our // namespace separator, we replace any literal colons in filenames @@ -175,11 +216,18 @@ .map((segment) => segment.replaceAll(':', '_')) .join(':'); + // Add extension name tag for extension commands + const defaultDescription = `Custom command from ${path.basename(filePath)}`; + let description = validDef.description || defaultDescription; + if (extensionName) { + description = `[${extensionName}] ${description}`; + } + const processors: IPromptProcessor[] = []; // Add the Shell Processor if needed. if (validDef.prompt.includes(SHELL_INJECTION_TRIGGER)) { - processors.push(new ShellProcessor(commandName)); + processors.push(new ShellProcessor(baseCommandName)); } // The presence of '{{args}}' is the switch that determines the behavior. @@ -190,18 +238,17 @@ } return { - name: commandName, - description: - validDef.description || - `Custom command from ${path.basename(filePath)}`, + name: baseCommandName, + description, kind: CommandKind.FILE, + extensionName, action: async ( context: CommandContext, _args: string, ): Promise => { if (!context.invocation) { console.error( - `[FileCommandLoader] Critical error: Command '${commandName}' was executed without invocation context.`, + `[FileCommandLoader] Critical error: Command '${baseCommandName}' was executed without invocation context.`, ); return { type: 'submit_prompt', diff --git a/packages/cli/src/test-utils/customMatchers.ts b/packages/cli/src/test-utils/customMatchers.ts new file mode 100644 index 00000000..c0b4df6b --- /dev/null +++ b/packages/cli/src/test-utils/customMatchers.ts @@ -0,0 +1,63 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +/// + +import { expect } from 'vitest'; +import type { TextBuffer } from 
'../ui/components/shared/text-buffer.js'; + +// RegExp to detect invalid characters: backspace, and ANSI escape codes +// eslint-disable-next-line no-control-regex +const invalidCharsRegex = /[\b\x1b]/; + +function toHaveOnlyValidCharacters(this: vi.Assertion, buffer: TextBuffer) { + const { isNot } = this; + let pass = true; + const invalidLines: Array<{ line: number; content: string }> = []; + + for (let i = 0; i < buffer.lines.length; i++) { + const line = buffer.lines[i]; + if (line.includes('\n')) { + pass = false; + invalidLines.push({ line: i, content: line }); + break; // Fail fast on newlines + } + if (invalidCharsRegex.test(line)) { + pass = false; + invalidLines.push({ line: i, content: line }); + } + } + + return { + pass, + message: () => + `Expected buffer ${isNot ? 'not ' : ''}to have only valid characters, but found invalid characters in lines:\n${invalidLines + .map((l) => ` [${l.line}]: "${l.content}"`) /* This line was changed */ + .join('\n')}`, + actual: buffer.lines, + expected: 'Lines with no line breaks, backspaces, or escape codes.', + }; +} + +expect.extend({ + toHaveOnlyValidCharacters, +}); + +// Extend Vitest's `expect` interface with the custom matcher's type definition. 
+declare module 'vitest' { + interface Assertion<T = unknown> { + toHaveOnlyValidCharacters(): T; + } + interface AsymmetricMatchersContaining { + toHaveOnlyValidCharacters(): void; + } +} diff --git a/packages/cli/src/test-utils/mockCommandContext.ts b/packages/cli/src/test-utils/mockCommandContext.ts index 029d0350..1831c88b 100644 --- a/packages/cli/src/test-utils/mockCommandContext.ts +++ b/packages/cli/src/test-utils/mockCommandContext.ts @@ -53,8 +53,10 @@ export const createMockCommandContext = ( setPendingItem: vi.fn(), loadHistory: vi.fn(), toggleCorgiMode: vi.fn(), + toggleVimEnabled: vi.fn(), }, session: { + sessionShellAllowlist: new Set(), stats: { sessionStartTime: new Date(), lastPromptTokenCount: 0, diff --git a/packages/cli/src/ui/App.test.tsx b/packages/cli/src/ui/App.test.tsx index 5c629fed..935c72c5 100644 --- a/packages/cli/src/ui/App.test.tsx +++ b/packages/cli/src/ui/App.test.tsx @@ -23,6 +23,10 @@ import { useGeminiStream } from './hooks/useGeminiStream.js'; import { useConsoleMessages } from './hooks/useConsoleMessages.js'; import { StreamingState, ConsoleMessageItem } from './types.js'; import { Tips } from './components/Tips.js'; +import { checkForUpdates, UpdateObject } from './utils/updateCheck.js'; +import { EventEmitter } from 'events'; +import { updateEventEmitter } from '../utils/updateEventEmitter.js'; +import * as auth from '../config/auth.js'; // Define a more complete mock server config based on actual Config interface MockServerConfig {
unsubscribe function + getIdeContext: vi.fn(), + subscribeToIdeContext: vi.fn(() => vi.fn()), // subscribe returns an unsubscribe function }; return { @@ -163,6 +171,7 @@ vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => { MCPServerConfig: actualCore.MCPServerConfig, getAllGeminiMdFilenames: vi.fn(() => ['GEMINI.md']), ideContext: ideContextMock, + isGitRepository: vi.fn(), }; }); @@ -220,6 +229,21 @@ vi.mock('./components/Header.js', () => ({ Header: vi.fn(() => null), })); +vi.mock('./utils/updateCheck.js', () => ({ + checkForUpdates: vi.fn(), +})); + +vi.mock('./config/auth.js', () => ({ + validateAuthMethod: vi.fn(), +})); + +const mockedCheckForUpdates = vi.mocked(checkForUpdates); +const { isGitRepository: mockedIsGitRepository } = vi.mocked( + await import('@qwen-code/qwen-code-core'), +); + +vi.mock('node:child_process'); + describe('App UI', () => { let mockConfig: MockServerConfig; let mockSettings: LoadedSettings; @@ -277,7 +301,14 @@ describe('App UI', () => { // Ensure a theme is set so the theme dialog does not appear. 
mockSettings = createMockSettings({ workspace: { theme: 'Default' } }); - vi.mocked(ideContext.getOpenFilesContext).mockReturnValue(undefined); + + // Ensure getWorkspaceContext is available if not added by the constructor + if (!mockConfig.getWorkspaceContext) { + mockConfig.getWorkspaceContext = vi.fn(() => ({ + getDirectories: vi.fn(() => ['/test/dir']), + })); + } + vi.mocked(ideContext.getIdeContext).mockReturnValue(undefined); }); afterEach(() => { @@ -288,11 +319,181 @@ describe('App UI', () => { vi.clearAllMocks(); // Clear mocks after each test }); + describe('handleAutoUpdate', () => { + let spawnEmitter: EventEmitter; + + beforeEach(async () => { + const { spawn } = await import('node:child_process'); + spawnEmitter = new EventEmitter(); + spawnEmitter.stdout = new EventEmitter(); + spawnEmitter.stderr = new EventEmitter(); + (spawn as vi.Mock).mockReturnValue(spawnEmitter); + }); + + afterEach(() => { + delete process.env.GEMINI_CLI_DISABLE_AUTOUPDATER; + }); + + it('should not start the update process when running from git', async () => { + mockedIsGitRepository.mockResolvedValue(true); + const info: UpdateObject = { + update: { + name: '@qwen-code/qwen-code', + latest: '1.1.0', + current: '1.0.0', + }, + message: 'Qwen Code update available!', + }; + mockedCheckForUpdates.mockResolvedValue(info); + const { spawn } = await import('node:child_process'); + + const { unmount } = render( + , + ); + currentUnmount = unmount; + + await new Promise((resolve) => setTimeout(resolve, 10)); + + expect(spawn).not.toHaveBeenCalled(); + }); + + it('should show a success message when update succeeds', async () => { + mockedIsGitRepository.mockResolvedValue(false); + const info: UpdateObject = { + update: { + name: '@qwen-code/qwen-code', + latest: '1.1.0', + current: '1.0.0', + }, + message: 'Update available', + }; + mockedCheckForUpdates.mockResolvedValue(info); + + const { lastFrame, unmount } = render( + , + ); + currentUnmount = unmount; + + 
updateEventEmitter.emit('update-success', info); + + await new Promise((resolve) => setTimeout(resolve, 10)); + + expect(lastFrame()).toContain( + 'Update successful! The new version will be used on your next run.', + ); + }); + + it('should show an error message when update fails', async () => { + mockedIsGitRepository.mockResolvedValue(false); + const info: UpdateObject = { + update: { + name: '@qwen-code/qwen-code', + latest: '1.1.0', + current: '1.0.0', + }, + message: 'Update available', + }; + mockedCheckForUpdates.mockResolvedValue(info); + + const { lastFrame, unmount } = render( + , + ); + currentUnmount = unmount; + + updateEventEmitter.emit('update-failed', info); + + await new Promise((resolve) => setTimeout(resolve, 10)); + + expect(lastFrame()).toContain( + 'Automatic update failed. Please try updating manually', + ); + }); + + it('should show an error message when spawn fails', async () => { + mockedIsGitRepository.mockResolvedValue(false); + const info: UpdateObject = { + update: { + name: '@qwen-code/qwen-code', + latest: '1.1.0', + current: '1.0.0', + }, + message: 'Update available', + }; + mockedCheckForUpdates.mockResolvedValue(info); + + const { lastFrame, unmount } = render( + , + ); + currentUnmount = unmount; + + // We are testing the App's reaction to an `update-failed` event, + // which is what should be emitted when a spawn error occurs elsewhere. + updateEventEmitter.emit('update-failed', info); + + await new Promise((resolve) => setTimeout(resolve, 10)); + + expect(lastFrame()).toContain( + 'Automatic update failed. 
Please try updating manually', + ); + }); + + it('should not auto-update if GEMINI_CLI_DISABLE_AUTOUPDATER is true', async () => { + mockedIsGitRepository.mockResolvedValue(false); + process.env.GEMINI_CLI_DISABLE_AUTOUPDATER = 'true'; + const info: UpdateObject = { + update: { + name: '@qwen-code/qwen-code', + latest: '1.1.0', + current: '1.0.0', + }, + message: 'Update available', + }; + mockedCheckForUpdates.mockResolvedValue(info); + const { spawn } = await import('node:child_process'); + + const { unmount } = render( + , + ); + currentUnmount = unmount; + + await new Promise((resolve) => setTimeout(resolve, 10)); + + expect(spawn).not.toHaveBeenCalled(); + }); + }); + it('should display active file when available', async () => { - vi.mocked(ideContext.getOpenFilesContext).mockReturnValue({ - activeFile: '/path/to/my-file.ts', - recentOpenFiles: [{ filePath: '/path/to/my-file.ts', content: 'hello' }], - selectedText: 'hello', + vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [ + { + path: '/path/to/my-file.ts', + isActive: true, + selectedText: 'hello', + timestamp: 0, + }, + ], + }, }); const { lastFrame, unmount } = render( @@ -304,12 +505,14 @@ describe('App UI', () => { ); currentUnmount = unmount; await Promise.resolve(); - expect(lastFrame()).toContain('1 recent file (ctrl+e to view)'); + expect(lastFrame()).toContain('1 open file (ctrl+e to view)'); }); - it('should not display active file when not available', async () => { - vi.mocked(ideContext.getOpenFilesContext).mockReturnValue({ - activeFile: '', + it('should not display any files when not available', async () => { + vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [], + }, }); const { lastFrame, unmount } = render( @@ -324,11 +527,54 @@ describe('App UI', () => { expect(lastFrame()).not.toContain('Open File'); }); + it('should display active file and other open files', async () => { + 
vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [ + { + path: '/path/to/my-file.ts', + isActive: true, + selectedText: 'hello', + timestamp: 0, + }, + { + path: '/path/to/another-file.ts', + isActive: false, + timestamp: 1, + }, + { + path: '/path/to/third-file.ts', + isActive: false, + timestamp: 2, + }, + ], + }, + }); + + const { lastFrame, unmount } = render( + , + ); + currentUnmount = unmount; + await Promise.resolve(); + expect(lastFrame()).toContain('3 open files (ctrl+e to view)'); + }); + it('should display active file and other context', async () => { - vi.mocked(ideContext.getOpenFilesContext).mockReturnValue({ - activeFile: '/path/to/my-file.ts', - recentOpenFiles: [{ filePath: '/path/to/my-file.ts', content: 'hello' }], - selectedText: 'hello', + vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [ + { + path: '/path/to/my-file.ts', + isActive: true, + selectedText: 'hello', + timestamp: 0, + }, + ], + }, }); mockConfig.getGeminiMdFileCount.mockReturnValue(1); mockConfig.getAllGeminiMdFilenames.mockReturnValue(['GEMINI.md']); @@ -343,7 +589,7 @@ describe('App UI', () => { currentUnmount = unmount; await Promise.resolve(); expect(lastFrame()).toContain( - 'Using: 1 recent file (ctrl+e to view) | 1 GEMINI.md file', + 'Using: 1 open file (ctrl+e to view) | 1 GEMINI.md file', ); }); @@ -764,4 +1010,50 @@ describe('App UI', () => { expect(lastFrame()).toContain('5 errors'); }); }); + + describe('auth validation', () => { + it('should call validateAuthMethod when useExternalAuth is false', async () => { + const validateAuthMethodSpy = vi.spyOn(auth, 'validateAuthMethod'); + mockSettings = createMockSettings({ + workspace: { + selectedAuthType: 'USE_GEMINI' as AuthType, + useExternalAuth: false, + theme: 'Default', + }, + }); + + const { unmount } = render( + , + ); + currentUnmount = unmount; + + expect(validateAuthMethodSpy).toHaveBeenCalledWith('USE_GEMINI'); + }); + + it('should 
NOT call validateAuthMethod when useExternalAuth is true', async () => { + const validateAuthMethodSpy = vi.spyOn(auth, 'validateAuthMethod'); + mockSettings = createMockSettings({ + workspace: { + selectedAuthType: 'USE_GEMINI' as AuthType, + useExternalAuth: true, + theme: 'Default', + }, + }); + + const { unmount } = render( + , + ); + currentUnmount = unmount; + + expect(validateAuthMethodSpy).not.toHaveBeenCalled(); + }); + }); }); diff --git a/packages/cli/src/ui/App.tsx b/packages/cli/src/ui/App.tsx index d7745b14..6eec40bf 100644 --- a/packages/cli/src/ui/App.tsx +++ b/packages/cli/src/ui/App.tsx @@ -38,7 +38,6 @@ import { AuthInProgress } from './components/AuthInProgress.js'; import { EditorSettingsDialog } from './components/EditorSettingsDialog.js'; import { ShellConfirmationDialog } from './components/ShellConfirmationDialog.js'; import { Colors } from './colors.js'; -import { Help } from './components/Help.js'; import { loadHierarchicalGeminiMemory } from '../config/config.js'; import { LoadedSettings } from '../config/settings.js'; import { Tips } from './components/Tips.js'; @@ -60,7 +59,7 @@ import { FlashFallbackEvent, logFlashFallback, AuthType, - type OpenFiles, + type IdeContext, ideContext, } from '@qwen-code/qwen-code-core'; import { validateAuthMethod } from '../config/auth.js'; @@ -83,11 +82,12 @@ import { isGenericQuotaExceededError, UserTierId, } from '@qwen-code/qwen-code-core'; -import { checkForUpdates } from './utils/updateCheck.js'; +import { UpdateObject } from './utils/updateCheck.js'; import ansiEscapes from 'ansi-escapes'; import { OverflowProvider } from './contexts/OverflowContext.js'; import { ShowMoreLines } from './components/ShowMoreLines.js'; import { PrivacyNotice } from './privacy/PrivacyNotice.js'; +import { setUpdateHandler } from '../utils/handleAutoUpdate.js'; import { appEvents, AppEvent } from '../utils/events.js'; const CTRL_EXIT_PROMPT_DURATION_MS = 1000; @@ -110,15 +110,16 @@ export const AppWrapper = (props: 
AppProps) => ( const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { const isFocused = useFocus(); useBracketedPaste(); - const [updateMessage, setUpdateMessage] = useState<string | null>(null); + const [updateInfo, setUpdateInfo] = useState<UpdateObject | null>(null); const { stdout } = useStdout(); const nightly = version.includes('nightly'); + const { history, addItem, clearItems, loadHistory } = useHistory(); useEffect(() => { - checkForUpdates().then(setUpdateMessage); - }, []); + const cleanup = setUpdateHandler(addItem, setUpdateInfo); + return cleanup; + }, [addItem]); - const { history, addItem, clearItems, loadHistory } = useHistory(); const { consoleMessages, handleNewMessage, @@ -144,7 +145,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { const [geminiMdFileCount, setGeminiMdFileCount] = useState(0); const [debugMessage, setDebugMessage] = useState(''); - const [showHelp, setShowHelp] = useState(false); const [themeError, setThemeError] = useState<string | null>(null); const [authError, setAuthError] = useState<string | null>(null); const [editorError, setEditorError] = useState<string | null>(null); @@ -169,13 +169,15 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { const [modelSwitchedFromQuotaError, setModelSwitchedFromQuotaError] = useState(false); const [userTier, setUserTier] = useState<UserTierId | undefined>(undefined); - const [openFiles, setOpenFiles] = useState<OpenFiles | undefined>(); + const [ideContextState, setIdeContextState] = useState< + IdeContext | undefined + >(); const [isProcessing, setIsProcessing] = useState(false); useEffect(() => { - const unsubscribe = ideContext.subscribeToOpenFiles(setOpenFiles); + const unsubscribe = ideContext.subscribeToIdeContext(setIdeContextState); // Set the initial value - setOpenFiles(ideContext.getOpenFilesContext()); + setIdeContextState(ideContext.getIdeContext()); return unsubscribe; }, []); @@ -230,14 +232,19 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { } = useAuthCommand(settings,
setAuthError, config); useEffect(() => { - if (settings.merged.selectedAuthType) { + if (settings.merged.selectedAuthType && !settings.merged.useExternalAuth) { const error = validateAuthMethod(settings.merged.selectedAuthType); if (error) { setAuthError(error); openAuthDialog(); } } - }, [settings.merged.selectedAuthType, openAuthDialog, setAuthError]); + }, [ + settings.merged.selectedAuthType, + settings.merged.useExternalAuth, + openAuthDialog, + setAuthError, + ]); // Sync user tier from config when authentication changes useEffect(() => { @@ -273,6 +280,7 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { config.getFileService(), settings.merged, config.getExtensionContextFilePaths(), + settings.merged.memoryImportFormat || 'tree', // Use setting or default to 'tree' config.getFileFilteringOptions(), ); @@ -396,6 +404,7 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { // Switch model for future use but return false to stop current retry config.setModel(fallbackModel); + config.setFallbackMode(true); logFlashFallback( config, new FlashFallbackEvent(config.getContentGeneratorConfig().authType!), @@ -462,7 +471,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { clearItems, loadHistory, refreshStatic, - setShowHelp, setDebugMessage, openThemeDialog, openAuthDialog, @@ -484,7 +492,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { config.getGeminiClient(), history, addItem, - setShowHelp, config, setDebugMessage, handleSlashCommand, @@ -568,7 +575,12 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { if (Object.keys(mcpServers || {}).length > 0) { handleSlashCommand(newValue ? 
'/mcp desc' : '/mcp nodesc'); } - } else if (key.ctrl && input === 'e' && ideContext) { + } else if ( + key.ctrl && + input === 'e' && + config.getIdeMode() && + ideContextState + ) { setShowIDEContextDetail((prev) => !prev); } else if (key.ctrl && (input === 'c' || input === 'C')) { handleExit(ctrlCPressedOnce, setCtrlCPressedOnce, ctrlCTimerRef); @@ -754,9 +766,6 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { return ( - {/* Move UpdateNotification outside Static so it can re-render when updateMessage changes */} - {updateMessage && <UpdateNotification message={updateMessage} />} - {/* * The Static component is an Ink intrinsic in which there can only be 1 per application. * Because of this restriction we're hacking it slightly by having a 'header' item here to @@ -789,6 +798,7 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { item={h} isPending={false} config={config} + commands={slashCommands} /> )), ]} @@ -816,9 +826,9 @@ const App = ({ config, settings, startupWarnings = [], version }: AppProps) => { - {showHelp && <Help commands={slashCommands} />} - + {/* Move UpdateNotification to render update notification above input area */} + {updateInfo && <UpdateNotification message={updateInfo.message} />} {startupWarnings.length > 0 && ( { ) : ( { {showIDEContextDetail && ( - <IDEContextDetailDisplay openFiles={openFiles} /> + <IDEContextDetailDisplay ideContext={ideContextState} /> )} {showErrorDetails && ( diff --git a/packages/cli/src/ui/commands/aboutCommand.test.ts b/packages/cli/src/ui/commands/aboutCommand.test.ts index 48dd6db3..43cd59ec 100644 --- a/packages/cli/src/ui/commands/aboutCommand.test.ts +++ b/packages/cli/src/ui/commands/aboutCommand.test.ts @@ -62,6 +62,7 @@ describe('aboutCommand', () => { }); it('should call addItem with all version info', async () => { + process.env.SANDBOX = ''; if (!aboutCommand.action) { throw new Error('The about command must have an action.'); } diff --git a/packages/cli/src/ui/commands/chatCommand.test.ts b/packages/cli/src/ui/commands/chatCommand.test.ts index 7b331d1d..533e697d 100644 --- a/packages/cli/src/ui/commands/chatCommand.test.ts +++ 
b/packages/cli/src/ui/commands/chatCommand.test.ts @@ -40,14 +40,17 @@ describe('chatCommand', () => { let mockGetChat: ReturnType<typeof vi.fn>; let mockSaveCheckpoint: ReturnType<typeof vi.fn>; let mockLoadCheckpoint: ReturnType<typeof vi.fn>; + let mockDeleteCheckpoint: ReturnType<typeof vi.fn>; let mockGetHistory: ReturnType<typeof vi.fn>; - const getSubCommand = (name: 'list' | 'save' | 'resume'): SlashCommand => { + const getSubCommand = ( + name: 'list' | 'save' | 'resume' | 'delete', + ): SlashCommand => { const subCommand = chatCommand.subCommands?.find( (cmd) => cmd.name === name, ); if (!subCommand) { - throw new Error(`/memory ${name} command not found.`); + throw new Error(`/chat ${name} command not found.`); } return subCommand; }; @@ -59,6 +62,7 @@ describe('chatCommand', () => { }); mockSaveCheckpoint = vi.fn().mockResolvedValue(undefined); mockLoadCheckpoint = vi.fn().mockResolvedValue([]); + mockDeleteCheckpoint = vi.fn().mockResolvedValue(true); mockContext = createMockCommandContext({ services: { @@ -72,6 +76,7 @@ describe('chatCommand', () => { logger: { saveCheckpoint: mockSaveCheckpoint, loadCheckpoint: mockLoadCheckpoint, + deleteCheckpoint: mockDeleteCheckpoint, initialize: vi.fn().mockResolvedValue(undefined), }, }, @@ -85,7 +90,7 @@ describe('chatCommand', () => { it('should have the correct main command definition', () => { expect(chatCommand.name).toBe('chat'); expect(chatCommand.description).toBe('Manage conversation history.'); - expect(chatCommand.subCommands).toHaveLength(3); + expect(chatCommand.subCommands).toHaveLength(4); }); describe('list subcommand', () => { @@ -297,4 +302,63 @@ describe('chatCommand', () => { }); }); }); + + describe('delete subcommand', () => { + let deleteCommand: SlashCommand; + const tag = 'my-tag'; + beforeEach(() => { + deleteCommand = getSubCommand('delete'); + }); + + it('should return an error if tag is missing', async () => { + const result = await deleteCommand?.action?.(mockContext, ' '); + expect(result).toEqual({ + type: 'message', + messageType: 'error', +
content: 'Missing tag. Usage: /chat delete <tag>', + }); + }); + + it('should return an error if checkpoint is not found', async () => { + mockDeleteCheckpoint.mockResolvedValue(false); + const result = await deleteCommand?.action?.(mockContext, tag); + expect(result).toEqual({ + type: 'message', + messageType: 'error', + content: `Error: No checkpoint found with tag '${tag}'.`, + }); + }); + + it('should delete the conversation', async () => { + const result = await deleteCommand?.action?.(mockContext, tag); + + expect(mockDeleteCheckpoint).toHaveBeenCalledWith(tag); + expect(result).toEqual({ + type: 'message', + messageType: 'info', + content: `Conversation checkpoint '${tag}' has been deleted.`, + }); + }); + + describe('completion', () => { + it('should provide completion suggestions', async () => { + const fakeFiles = ['checkpoint-alpha.json', 'checkpoint-beta.json']; + mockFs.readdir.mockImplementation( + (async (_: string): Promise<string[]> => + fakeFiles as string[]) as unknown as typeof fsPromises.readdir, + ); + + mockFs.stat.mockImplementation( + (async (_: string): Promise<Stats> => + ({ + mtime: new Date(), + }) as Stats) as unknown as typeof fsPromises.stat, + ); + + const result = await deleteCommand?.completion?.(mockContext, 'a'); + + expect(result).toEqual(['alpha']); + }); + }); + }); }); diff --git a/packages/cli/src/ui/commands/chatCommand.ts b/packages/cli/src/ui/commands/chatCommand.ts index 739097e3..a5fa13da 100644 --- a/packages/cli/src/ui/commands/chatCommand.ts +++ b/packages/cli/src/ui/commands/chatCommand.ts @@ -206,9 +206,49 @@ const resumeCommand: SlashCommand = { }, }; +const deleteCommand: SlashCommand = { + name: 'delete', + description: 'Delete a conversation checkpoint. Usage: /chat delete <tag>', + kind: CommandKind.BUILT_IN, + action: async (context, args): Promise<MessageActionReturn> => { + const tag = args.trim(); + if (!tag) { + return { + type: 'message', + messageType: 'error', + content: 'Missing tag.
Usage: /chat delete <tag>', + }; + } + + const { logger } = context.services; + await logger.initialize(); + const deleted = await logger.deleteCheckpoint(tag); + + if (deleted) { + return { + type: 'message', + messageType: 'info', + content: `Conversation checkpoint '${tag}' has been deleted.`, + }; + } else { + return { + type: 'message', + messageType: 'error', + content: `Error: No checkpoint found with tag '${tag}'.`, + }; + } + }, + completion: async (context, partialArg) => { + const chatDetails = await getSavedChatTags(context, true); + return chatDetails + .map((chat) => chat.name) + .filter((name) => name.startsWith(partialArg)); + }, +}; + export const chatCommand: SlashCommand = { name: 'chat', description: 'Manage conversation history.', kind: CommandKind.BUILT_IN, - subCommands: [listCommand, saveCommand, resumeCommand], + subCommands: [listCommand, saveCommand, resumeCommand, deleteCommand], }; diff --git a/packages/cli/src/ui/commands/directoryCommand.test.tsx b/packages/cli/src/ui/commands/directoryCommand.test.tsx new file mode 100644 index 00000000..14b826ea --- /dev/null +++ b/packages/cli/src/ui/commands/directoryCommand.test.tsx @@ -0,0 +1,172 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, vi, beforeEach } from 'vitest'; +import { directoryCommand, expandHomeDir } from './directoryCommand.js'; +import { Config, WorkspaceContext } from '@qwen-code/qwen-code-core'; +import { CommandContext } from './types.js'; +import { MessageType } from '../types.js'; +import * as os from 'os'; +import * as path from 'path'; + +describe('directoryCommand', () => { + let mockContext: CommandContext; + let mockConfig: Config; + let mockWorkspaceContext: WorkspaceContext; + const addCommand = directoryCommand.subCommands?.find( + (c) => c.name === 'add', + ); + const showCommand = directoryCommand.subCommands?.find( + (c) => c.name === 'show', + ); + + beforeEach(() => { 
mockWorkspaceContext = { + addDirectory: vi.fn(), + getDirectories: vi + .fn() + .mockReturnValue([ + path.normalize('/home/user/project1'), + path.normalize('/home/user/project2'), + ]), + } as unknown as WorkspaceContext; + + mockConfig = { + getWorkspaceContext: () => mockWorkspaceContext, + isRestrictiveSandbox: vi.fn().mockReturnValue(false), + getGeminiClient: vi.fn().mockReturnValue({ + addDirectoryContext: vi.fn(), + }), + } as unknown as Config; + + mockContext = { + services: { + config: mockConfig, + }, + ui: { + addItem: vi.fn(), + }, + } as unknown as CommandContext; + }); + + describe('show', () => { + it('should display the list of directories', () => { + if (!showCommand?.action) throw new Error('No action'); + showCommand.action(mockContext, ''); + expect(mockWorkspaceContext.getDirectories).toHaveBeenCalled(); + expect(mockContext.ui.addItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: MessageType.INFO, + text: `Current workspace directories:\n- ${path.normalize( + '/home/user/project1', + )}\n- ${path.normalize('/home/user/project2')}`, + }), + expect.any(Number), + ); + }); + }); + + describe('add', () => { + it('should show an error if no path is provided', () => { + if (!addCommand?.action) throw new Error('No action'); + addCommand.action(mockContext, ''); + expect(mockContext.ui.addItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: MessageType.ERROR, + text: 'Please provide at least one path to add.', + }), + expect.any(Number), + ); + }); + + it('should call addDirectory and show a success message for a single path', async () => { + const newPath = path.normalize('/home/user/new-project'); + if (!addCommand?.action) throw new Error('No action'); + await addCommand.action(mockContext, newPath); + expect(mockWorkspaceContext.addDirectory).toHaveBeenCalledWith(newPath); + expect(mockContext.ui.addItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: MessageType.INFO, + text: `Successfully added 
directories:\n- ${newPath}`, + }), + expect.any(Number), + ); + }); + + it('should call addDirectory for each path and show a success message for multiple paths', async () => { + const newPath1 = path.normalize('/home/user/new-project1'); + const newPath2 = path.normalize('/home/user/new-project2'); + if (!addCommand?.action) throw new Error('No action'); + await addCommand.action(mockContext, `${newPath1},${newPath2}`); + expect(mockWorkspaceContext.addDirectory).toHaveBeenCalledWith(newPath1); + expect(mockWorkspaceContext.addDirectory).toHaveBeenCalledWith(newPath2); + expect(mockContext.ui.addItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: MessageType.INFO, + text: `Successfully added directories:\n- ${newPath1}\n- ${newPath2}`, + }), + expect.any(Number), + ); + }); + + it('should show an error if addDirectory throws an exception', async () => { + const error = new Error('Directory does not exist'); + vi.mocked(mockWorkspaceContext.addDirectory).mockImplementation(() => { + throw error; + }); + const newPath = path.normalize('/home/user/invalid-project'); + if (!addCommand?.action) throw new Error('No action'); + await addCommand.action(mockContext, newPath); + expect(mockContext.ui.addItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: MessageType.ERROR, + text: `Error adding '${newPath}': ${error.message}`, + }), + expect.any(Number), + ); + }); + + it('should handle a mix of successful and failed additions', async () => { + const validPath = path.normalize('/home/user/valid-project'); + const invalidPath = path.normalize('/home/user/invalid-project'); + const error = new Error('Directory does not exist'); + vi.mocked(mockWorkspaceContext.addDirectory).mockImplementation( + (p: string) => { + if (p === invalidPath) { + throw error; + } + }, + ); + + if (!addCommand?.action) throw new Error('No action'); + await addCommand.action(mockContext, `${validPath},${invalidPath}`); + + expect(mockContext.ui.addItem).toHaveBeenCalledWith( + 
expect.objectContaining({ + type: MessageType.INFO, + text: `Successfully added directories:\n- ${validPath}`, + }), + expect.any(Number), + ); + + expect(mockContext.ui.addItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: MessageType.ERROR, + text: `Error adding '${invalidPath}': ${error.message}`, + }), + expect.any(Number), + ); + }); + }); + it('should correctly expand a Windows-style home directory path', () => { + const windowsPath = '%userprofile%\\Documents'; + const expectedPath = path.win32.join(os.homedir(), 'Documents'); + const result = expandHomeDir(windowsPath); + expect(path.win32.normalize(result)).toBe( + path.win32.normalize(expectedPath), + ); + }); +}); diff --git a/packages/cli/src/ui/commands/directoryCommand.tsx b/packages/cli/src/ui/commands/directoryCommand.tsx new file mode 100644 index 00000000..18f7e78f --- /dev/null +++ b/packages/cli/src/ui/commands/directoryCommand.tsx @@ -0,0 +1,150 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { SlashCommand, CommandContext, CommandKind } from './types.js'; +import { MessageType } from '../types.js'; +import * as os from 'os'; +import * as path from 'path'; + +export function expandHomeDir(p: string): string { + if (!p) { + return ''; + } + let expandedPath = p; + if (p.toLowerCase().startsWith('%userprofile%')) { + expandedPath = os.homedir() + p.substring('%userprofile%'.length); + } else if (p.startsWith('~')) { + expandedPath = os.homedir() + p.substring(1); + } + return path.normalize(expandedPath); +} + +export const directoryCommand: SlashCommand = { + name: 'directory', + altNames: ['dir'], + description: 'Manage workspace directories', + kind: CommandKind.BUILT_IN, + subCommands: [ + { + name: 'add', + description: + 'Add directories to the workspace. 
Use comma to separate multiple paths',
+      kind: CommandKind.BUILT_IN,
+      action: async (context: CommandContext, args: string) => {
+        const {
+          ui: { addItem },
+          services: { config },
+        } = context;
+        const [...rest] = args.split(' ');
+
+        if (!config) {
+          addItem(
+            {
+              type: MessageType.ERROR,
+              text: 'Configuration is not available.',
+            },
+            Date.now(),
+          );
+          return;
+        }
+
+        const workspaceContext = config.getWorkspaceContext();
+
+        const pathsToAdd = rest
+          .join(' ')
+          .split(',')
+          .filter((p) => p);
+        if (pathsToAdd.length === 0) {
+          addItem(
+            {
+              type: MessageType.ERROR,
+              text: 'Please provide at least one path to add.',
+            },
+            Date.now(),
+          );
+          return;
+        }
+
+        if (config.isRestrictiveSandbox()) {
+          return {
+            type: 'message' as const,
+            messageType: 'error' as const,
+            content:
+              'The /directory add command is not supported in restrictive sandbox profiles. Please use --include-directories when starting the session instead.',
+          };
+        }
+
+        const added: string[] = [];
+        const errors: string[] = [];
+
+        for (const pathToAdd of pathsToAdd) {
+          try {
+            workspaceContext.addDirectory(expandHomeDir(pathToAdd.trim()));
+            added.push(pathToAdd.trim());
+          } catch (e) {
+            const error = e as Error;
+            errors.push(`Error adding '${pathToAdd.trim()}': ${error.message}`);
+          }
+        }
+
+        if (added.length > 0) {
+          const gemini = config.getGeminiClient();
+          if (gemini) {
+            await gemini.addDirectoryContext();
+          }
+          addItem(
+            {
+              type: MessageType.INFO,
+              text: `Successfully added directories:\n- ${added.join('\n- ')}`,
+            },
+            Date.now(),
+          );
+        }
+
+        if (errors.length > 0) {
+          addItem(
+            {
+              type: MessageType.ERROR,
+              text: errors.join('\n'),
+            },
+            Date.now(),
+          );
+        }
+      },
+    },
+    {
+      name: 'show',
+      description: 'Show all directories in the workspace',
+      kind: CommandKind.BUILT_IN,
+      action: async (context: CommandContext) => {
+        const {
+          ui: { addItem },
+          services: { config },
+        } = context;
+        if (!config) {
+          addItem(
+            {
+              type: MessageType.ERROR,
+              text: 'Configuration is not available.',
+            },
+            Date.now(),
+          );
+          return;
+        }
+        const workspaceContext = config.getWorkspaceContext();
+        const directories = workspaceContext.getDirectories();
+        const directoryList = directories.map((dir) => `- ${dir}`).join('\n');
+        addItem(
+          {
+            type: MessageType.INFO,
+            text: `Current workspace directories:\n${directoryList}`,
+          },
+          Date.now(),
+        );
+      },
+    },
+  ],
+};
diff --git a/packages/cli/src/ui/commands/helpCommand.test.ts b/packages/cli/src/ui/commands/helpCommand.test.ts
index b0441106..e956d1c5 100644
--- a/packages/cli/src/ui/commands/helpCommand.test.ts
+++ b/packages/cli/src/ui/commands/helpCommand.test.ts
@@ -4,37 +4,49 @@
  * SPDX-License-Identifier: Apache-2.0
  */
 
-import { vi, describe, it, expect, beforeEach } from 'vitest';
+import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 import { helpCommand } from './helpCommand.js';
 import { type CommandContext } from './types.js';
+import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
+import { MessageType } from '../types.js';
+import { CommandKind } from './types.js';
 
 describe('helpCommand', () => {
   let mockContext: CommandContext;
+  const originalEnv = { ...process.env };
 
   beforeEach(() => {
-    mockContext = {} as unknown as CommandContext;
+    mockContext = createMockCommandContext({
+      ui: {
+        addItem: vi.fn(),
+      },
+    } as unknown as CommandContext);
   });
 
-  it("should return a dialog action and log a debug message for '/help'", () => {
-    const consoleDebugSpy = vi
-      .spyOn(console, 'debug')
-      .mockImplementation(() => {});
+  afterEach(() => {
+    process.env = { ...originalEnv };
+    vi.clearAllMocks();
+  });
+
+  it('should add a help message to the UI history', async () => {
     if (!helpCommand.action) {
       throw new Error('Help command has no action');
     }
-    const result = helpCommand.action(mockContext, '');
 
-    expect(result).toEqual({
-      type: 'dialog',
-      dialog: 'help',
-    });
-    expect(consoleDebugSpy).toHaveBeenCalledWith('Opening help UI ...');
+    await helpCommand.action(mockContext, '');
+
+    expect(mockContext.ui.addItem).toHaveBeenCalledWith(
+      expect.objectContaining({
+        type: MessageType.HELP,
+        timestamp: expect.any(Date),
+      }),
+      expect.any(Number),
+    );
   });
 
-  it("should also be triggered by its alternative name '?'", () => {
-    // This test is more conceptual. The routing of altNames to the command
-    // is handled by the slash command processor, but we can assert the
-    // altNames is correctly defined on the command object itself.
-    expect(helpCommand.altNames).toContain('?');
+  it('should have the correct command properties', () => {
+    expect(helpCommand.name).toBe('help');
+    expect(helpCommand.kind).toBe(CommandKind.BUILT_IN);
+    expect(helpCommand.description).toBe('for help on Qwen Code');
   });
 });
diff --git a/packages/cli/src/ui/commands/helpCommand.ts b/packages/cli/src/ui/commands/helpCommand.ts
index a0169309..59aae909 100644
--- a/packages/cli/src/ui/commands/helpCommand.ts
+++ b/packages/cli/src/ui/commands/helpCommand.ts
@@ -4,18 +4,20 @@
  * SPDX-License-Identifier: Apache-2.0
  */
 
-import { CommandKind, OpenDialogActionReturn, SlashCommand } from './types.js';
+import { CommandKind, SlashCommand } from './types.js';
+import { MessageType, type HistoryItemHelp } from '../types.js';
 
 export const helpCommand: SlashCommand = {
   name: 'help',
   altNames: ['?'],
-  description: 'for help on Qwen Code',
   kind: CommandKind.BUILT_IN,
-  action: (_context, _args): OpenDialogActionReturn => {
-    console.debug('Opening help UI ...');
-    return {
-      type: 'dialog',
-      dialog: 'help',
+  description: 'for help on Qwen Code',
+  action: async (context) => {
+    const helpItem: Omit<HistoryItemHelp, 'id'> = {
+      type: MessageType.HELP,
+      timestamp: new Date(),
     };
+
+    context.ui.addItem(helpItem, Date.now());
   },
 };
diff --git a/packages/cli/src/ui/commands/ideCommand.test.ts b/packages/cli/src/ui/commands/ideCommand.test.ts
index 22464781..81238d91 100644
--- a/packages/cli/src/ui/commands/ideCommand.test.ts
+++ b/packages/cli/src/ui/commands/ideCommand.test.ts
@@ -15,24 +15,16 @@ import {
 } from 'vitest';
 import { ideCommand } from './ideCommand.js';
 import { type CommandContext } from './types.js';
-import { type Config } from '@qwen-code/qwen-code-core';
-import * as child_process from 'child_process';
-import { glob } from 'glob';
-
-import { IDEConnectionStatus } from '@qwen-code/qwen-code-core/index.js';
+import { type Config, DetectedIde } from '@qwen-code/qwen-code-core';
+import * as core from '@qwen-code/qwen-code-core';
 
 vi.mock('child_process');
 vi.mock('glob');
-
-function regexEscape(value: string) {
-  return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
-}
+vi.mock('@qwen-code/qwen-code-core');
 
 describe('ideCommand', () => {
   let mockContext: CommandContext;
   let mockConfig: Config;
-  let execSyncSpy: MockInstance;
-  let globSyncSpy: MockInstance;
   let platformSpy: MockInstance;
 
   beforeEach(() => {
@@ -40,15 +32,21 @@ describe('ideCommand', () => {
       ui: {
         addItem: vi.fn(),
       },
+      services: {
+        settings: {
+          setValue: vi.fn(),
+        },
+      },
     } as unknown as CommandContext;
 
     mockConfig = {
+      getIdeModeFeature: vi.fn(),
       getIdeMode: vi.fn(),
       getIdeClient: vi.fn(),
+      setIdeMode: vi.fn(),
+      setIdeClientDisconnected: vi.fn(),
     } as unknown as Config;
 
-    execSyncSpy = vi.spyOn(child_process, 'execSync');
-    globSyncSpy = vi.spyOn(glob, 'sync');
     platformSpy = vi.spyOn(process, 'platform', 'get');
   });
 
@@ -56,51 +54,61 @@ describe('ideCommand', () => {
     vi.restoreAllMocks();
   });
 
-  it('should return null if ideMode is not enabled', () => {
-    vi.mocked(mockConfig.getIdeMode).mockReturnValue(false);
+  it('should return null if ideModeFeature is not enabled', () => {
+    vi.mocked(mockConfig.getIdeModeFeature).mockReturnValue(false);
     const command = ideCommand(mockConfig);
     expect(command).toBeNull();
   });
 
-  it('should return the ide command if ideMode is enabled', () => {
+  it('should return the ide command if ideModeFeature is enabled', () => {
+    vi.mocked(mockConfig.getIdeModeFeature).mockReturnValue(true);
     vi.mocked(mockConfig.getIdeMode).mockReturnValue(true);
+    vi.mocked(mockConfig.getIdeClient).mockReturnValue({
+      getCurrentIde: () => DetectedIde.VSCode,
+      getDetectedIdeDisplayName: () => 'VS Code',
+    } as ReturnType<Config['getIdeClient']>);
     const command = ideCommand(mockConfig);
     expect(command).not.toBeNull();
     expect(command?.name).toBe('ide');
-    expect(command?.subCommands).toHaveLength(2);
-    expect(command?.subCommands?.[0].name).toBe('status');
-    expect(command?.subCommands?.[1].name).toBe('install');
+    expect(command?.subCommands).toHaveLength(3);
+    expect(command?.subCommands?.[0].name).toBe('disable');
+    expect(command?.subCommands?.[1].name).toBe('status');
+    expect(command?.subCommands?.[2].name).toBe('install');
   });
 
   describe('status subcommand', () => {
     const mockGetConnectionStatus = vi.fn();
     beforeEach(() => {
-      vi.mocked(mockConfig.getIdeMode).mockReturnValue(true);
+      vi.mocked(mockConfig.getIdeModeFeature).mockReturnValue(true);
       vi.mocked(mockConfig.getIdeClient).mockReturnValue({
         getConnectionStatus: mockGetConnectionStatus,
-      } as ReturnType<Config['getIdeClient']>);
+        getCurrentIde: () => DetectedIde.VSCode,
+        getDetectedIdeDisplayName: () => 'VS Code',
+      } as unknown as ReturnType<Config['getIdeClient']>);
     });
 
     it('should show connected status', () => {
       mockGetConnectionStatus.mockReturnValue({
-        status: IDEConnectionStatus.Connected,
+        status: core.IDEConnectionStatus.Connected,
       });
       const command = ideCommand(mockConfig);
-      const result = command!.subCommands![0].action!(mockContext, '');
+      const result = command!.subCommands!.find((c) => c.name === 'status')!
+        .action!(mockContext, '');
       expect(mockGetConnectionStatus).toHaveBeenCalled();
       expect(result).toEqual({
         type: 'message',
         messageType: 'info',
-        content: '🟢 Connected',
+        content: '🟢 Connected to VS Code',
       });
     });
 
     it('should show connecting status', () => {
       mockGetConnectionStatus.mockReturnValue({
-        status: IDEConnectionStatus.Connecting,
+        status: core.IDEConnectionStatus.Connecting,
       });
       const command = ideCommand(mockConfig);
-      const result = command!.subCommands![0].action!(mockContext, '');
+      const result = command!.subCommands!.find((c) => c.name === 'status')!
+        .action!(mockContext, '');
       expect(mockGetConnectionStatus).toHaveBeenCalled();
       expect(result).toEqual({
         type: 'message',
@@ -110,10 +118,11 @@ describe('ideCommand', () => {
     });
     it('should show disconnected status', () => {
       mockGetConnectionStatus.mockReturnValue({
-        status: IDEConnectionStatus.Disconnected,
+        status: core.IDEConnectionStatus.Disconnected,
       });
       const command = ideCommand(mockConfig);
-      const result = command!.subCommands![0].action!(mockContext, '');
+      const result = command!.subCommands!.find((c) => c.name === 'status')!
+        .action!(mockContext, '');
       expect(mockGetConnectionStatus).toHaveBeenCalled();
       expect(result).toEqual({
         type: 'message',
@@ -125,11 +134,12 @@ describe('ideCommand', () => {
     it('should show disconnected status with details', () => {
       const details = 'Something went wrong';
       mockGetConnectionStatus.mockReturnValue({
-        status: IDEConnectionStatus.Disconnected,
+        status: core.IDEConnectionStatus.Disconnected,
         details,
       });
       const command = ideCommand(mockConfig);
-      const result = command!.subCommands![0].action!(mockContext, '');
+      const result = command!.subCommands!.find((c) => c.name === 'status')!
+        .action!(mockContext, '');
       expect(mockGetConnectionStatus).toHaveBeenCalled();
       expect(result).toEqual({
         type: 'message',
@@ -140,128 +150,77 @@ describe('ideCommand', () => {
   });
 
   describe('install subcommand', () => {
+    const mockInstall = vi.fn();
     beforeEach(() => {
+      vi.mocked(mockConfig.getIdeModeFeature).mockReturnValue(true);
       vi.mocked(mockConfig.getIdeMode).mockReturnValue(true);
+      vi.mocked(mockConfig.getIdeClient).mockReturnValue({
+        getCurrentIde: () => DetectedIde.VSCode,
+        getConnectionStatus: vi.fn(),
+        getDetectedIdeDisplayName: () => 'VS Code',
+      } as unknown as ReturnType<Config['getIdeClient']>);
+      vi.mocked(core.getIdeInstaller).mockReturnValue({
+        install: mockInstall,
+        isInstalled: vi.fn(),
+      });
       platformSpy.mockReturnValue('linux');
     });
 
-    it('should show an error if VSCode is not installed', async () => {
-      execSyncSpy.mockImplementation(() => {
-        throw new Error('Command not found');
+    it('should install the extension', async () => {
+      mockInstall.mockResolvedValue({
+        success: true,
+        message: 'Successfully installed.',
       });
       const command = ideCommand(mockConfig);
-
-      await command!.subCommands![1].action!(mockContext, '');
-      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
-        expect.objectContaining({
-          type: 'error',
-          text: expect.stringMatching(/VS Code command-line tool .* not found/),
-        }),
-        expect.any(Number),
+      await command!.subCommands!.find((c) => c.name === 'install')!.action!(
+        mockContext,
+        '',
       );
-    });
 
-    it('should show an error if the VSIX file is not found', async () => {
-      execSyncSpy.mockReturnValue(''); // VSCode is installed
-      globSyncSpy.mockReturnValue([]); // No .vsix file found
-
-      const command = ideCommand(mockConfig);
-      await command!.subCommands![1].action!(mockContext, '');
-
-      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
-        expect.objectContaining({
-          type: 'error',
-          text: 'Could not find the required VS Code companion extension. Please file a bug via /bug.',
-        }),
-        expect.any(Number),
-      );
-    });
-
-    it('should install the extension if found in the bundle directory', async () => {
-      const vsixPath = '/path/to/bundle/gemini.vsix';
-      execSyncSpy.mockReturnValue(''); // VSCode is installed
-      globSyncSpy.mockReturnValue([vsixPath]); // Found .vsix file
-
-      const command = ideCommand(mockConfig);
-      await command!.subCommands![1].action!(mockContext, '');
-
-      expect(globSyncSpy).toHaveBeenCalledWith(
-        expect.stringContaining('.vsix'),
-      );
-      expect(execSyncSpy).toHaveBeenCalledWith(
-        expect.stringMatching(
-          new RegExp(
-            `code(.cmd)? --install-extension ${regexEscape(vsixPath)} --force`,
-          ),
-        ),
-        { stdio: 'pipe' },
-      );
+      expect(core.getIdeInstaller).toHaveBeenCalledWith('vscode');
+      expect(mockInstall).toHaveBeenCalled();
       expect(mockContext.ui.addItem).toHaveBeenCalledWith(
         expect.objectContaining({
           type: 'info',
-          text: `Installing VS Code companion extension...`,
+          text: `Installing IDE companion...`,
         }),
         expect.any(Number),
       );
       expect(mockContext.ui.addItem).toHaveBeenCalledWith(
         expect.objectContaining({
           type: 'info',
-          text: 'VS Code companion extension installed successfully. Restart gemini-cli in a fresh terminal window.',
-        }),
-        expect.any(Number),
-      );
-    });
-
-    it('should install the extension if found in the dev directory', async () => {
-      const vsixPath = '/path/to/dev/gemini.vsix';
-      execSyncSpy.mockReturnValue(''); // VSCode is installed
-      // First glob call for bundle returns nothing, second for dev returns path.
-      globSyncSpy.mockReturnValueOnce([]).mockReturnValueOnce([vsixPath]);
-
-      const command = ideCommand(mockConfig);
-      await command!.subCommands![1].action!(mockContext, '');
-
-      expect(globSyncSpy).toHaveBeenCalledTimes(2);
-      expect(execSyncSpy).toHaveBeenCalledWith(
-        expect.stringMatching(
-          new RegExp(
-            `code(.cmd)? --install-extension ${regexEscape(vsixPath)} --force`,
-          ),
-        ),
-        { stdio: 'pipe' },
-      );
-      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
-        expect.objectContaining({
-          type: 'info',
-          text: 'VS Code companion extension installed successfully. Restart gemini-cli in a fresh terminal window.',
+          text: 'Successfully installed.',
         }),
         expect.any(Number),
       );
     });
 
     it('should show an error if installation fails', async () => {
-      const vsixPath = '/path/to/bundle/gemini.vsix';
-      const errorMessage = 'Installation failed';
-      execSyncSpy
-        .mockReturnValueOnce('') // VSCode is installed check
-        .mockImplementation(() => {
-          // Installation command
-          const error: Error & { stderr?: Buffer } = new Error(
-            'Command failed',
-          );
-          error.stderr = Buffer.from(errorMessage);
-          throw error;
-        });
-      globSyncSpy.mockReturnValue([vsixPath]);
+      mockInstall.mockResolvedValue({
+        success: false,
+        message: 'Installation failed.',
+      });
 
       const command = ideCommand(mockConfig);
-      await command!.subCommands![1].action!(mockContext, '');
+      await command!.subCommands!.find((c) => c.name === 'install')!.action!(
+        mockContext,
+        '',
+      );
 
+      expect(core.getIdeInstaller).toHaveBeenCalledWith('vscode');
+      expect(mockInstall).toHaveBeenCalled();
+      expect(mockContext.ui.addItem).toHaveBeenCalledWith(
+        expect.objectContaining({
+          type: 'info',
+          text: `Installing IDE companion...`,
+        }),
+        expect.any(Number),
+      );
       expect(mockContext.ui.addItem).toHaveBeenCalledWith(
         expect.objectContaining({
           type: 'error',
-          text: `Failed to install VS Code companion extension.`,
+          text: 'Installation failed.',
         }),
         expect.any(Number),
       );
diff --git a/packages/cli/src/ui/commands/ideCommand.ts b/packages/cli/src/ui/commands/ideCommand.ts
index 5631e7c2..55177d16 100644
--- a/packages/cli/src/ui/commands/ideCommand.ts
+++ b/packages/cli/src/ui/commands/ideCommand.ts
@@ -4,154 +4,158 @@
  * SPDX-License-Identifier: Apache-2.0
  */
 
-import { fileURLToPath } from 'url';
-import { Config, IDEConnectionStatus } from '@qwen-code/qwen-code-core';
+import {
+  Config,
+  DetectedIde,
+  IDEConnectionStatus,
+  getIdeDisplayName,
+  getIdeInstaller,
+} from '@qwen-code/qwen-code-core';
 import {
   CommandContext,
   SlashCommand,
   SlashCommandActionReturn,
   CommandKind,
 } from './types.js';
-import * as child_process from 'child_process';
-import * as process from 'process';
-import { glob } from 'glob';
-import * as path from 'path';
-
-const VSCODE_COMMAND = process.platform === 'win32' ? 'code.cmd' : 'code';
-const VSCODE_COMPANION_EXTENSION_FOLDER = 'vscode-ide-companion';
-
-function isVSCodeInstalled(): boolean {
-  try {
-    child_process.execSync(
-      process.platform === 'win32'
-        ? `where.exe ${VSCODE_COMMAND}`
-        : `command -v ${VSCODE_COMMAND}`,
-      { stdio: 'ignore' },
-    );
-    return true;
-  } catch {
-    return false;
-  }
-}
+import { SettingScope } from '../../config/settings.js';
 
 export const ideCommand = (config: Config | null): SlashCommand | null => {
-  if (!config?.getIdeMode()) {
+  if (!config || !config.getIdeModeFeature()) {
     return null;
   }
+  const ideClient = config.getIdeClient();
+  const currentIDE = ideClient.getCurrentIde();
+  if (!currentIDE || !ideClient.getDetectedIdeDisplayName()) {
+    return {
+      name: 'ide',
+      description: 'manage IDE integration',
+      kind: CommandKind.BUILT_IN,
+      action: (): SlashCommandActionReturn =>
+        ({
+          type: 'message',
+          messageType: 'error',
+          content: `IDE integration is not supported in your current environment. To use this feature, run Gemini CLI in one of these supported IDEs: ${Object.values(
+            DetectedIde,
+          )
+            .map((ide) => getIdeDisplayName(ide))
+            .join(', ')}`,
+        }) as const,
+    };
+  }
 
-  return {
+  const ideSlashCommand: SlashCommand = {
     name: 'ide',
     description: 'manage IDE integration',
     kind: CommandKind.BUILT_IN,
-    subCommands: [
-      {
-        name: 'status',
-        description: 'check status of IDE integration',
-        kind: CommandKind.BUILT_IN,
-        action: (_context: CommandContext): SlashCommandActionReturn => {
-          const connection = config.getIdeClient()?.getConnectionStatus();
-          switch (connection?.status) {
-            case IDEConnectionStatus.Connected:
-              return {
-                type: 'message',
-                messageType: 'info',
-                content: `🟢 Connected`,
-              } as const;
-            case IDEConnectionStatus.Connecting:
-              return {
-                type: 'message',
-                messageType: 'info',
-                content: `🟡 Connecting...`,
-              } as const;
-            default: {
-              let content = `🔴 Disconnected`;
-              if (connection?.details) {
-                content += `: ${connection.details}`;
-              }
-              return {
-                type: 'message',
-                messageType: 'error',
-                content,
-              } as const;
-            }
-          }
-        },
-      },
-      {
-        name: 'install',
-        description: 'install required VS Code companion extension',
-        kind: CommandKind.BUILT_IN,
-        action: async (context) => {
-          if (!isVSCodeInstalled()) {
-            context.ui.addItem(
-              {
-                type: 'error',
-                text: `VS Code command-line tool "${VSCODE_COMMAND}" not found in your PATH.`,
-              },
-              Date.now(),
-            );
-            return;
-          }
-
-          const bundleDir = path.dirname(fileURLToPath(import.meta.url));
-          // The VSIX file is copied to the bundle directory as part of the build.
-          let vsixFiles = glob.sync(path.join(bundleDir, '*.vsix'));
-          if (vsixFiles.length === 0) {
-            // If the VSIX file is not in the bundle, it might be a dev
-            // environment running with `npm start`. Look for it in the original
-            // package location, relative to the bundle dir.
-            const devPath = path.join(
-              bundleDir,
-              '..',
-              '..',
-              '..',
-              '..',
-              '..',
-              VSCODE_COMPANION_EXTENSION_FOLDER,
-              '*.vsix',
-            );
-            vsixFiles = glob.sync(devPath);
-          }
-          if (vsixFiles.length === 0) {
-            context.ui.addItem(
-              {
-                type: 'error',
-                text: 'Could not find the required VS Code companion extension. Please file a bug via /bug.',
-              },
-              Date.now(),
-            );
-            return;
-          }
-
-          const vsixPath = vsixFiles[0];
-          const command = `${VSCODE_COMMAND} --install-extension ${vsixPath} --force`;
-          context.ui.addItem(
-            {
-              type: 'info',
-              text: `Installing VS Code companion extension...`,
-            },
-            Date.now(),
-          );
-          try {
-            child_process.execSync(command, { stdio: 'pipe' });
-            context.ui.addItem(
-              {
-                type: 'info',
-                text: 'VS Code companion extension installed successfully. Restart gemini-cli in a fresh terminal window.',
-              },
-              Date.now(),
-            );
-          } catch (_error) {
-            context.ui.addItem(
-              {
-                type: 'error',
-                text: `Failed to install VS Code companion extension.`,
-              },
-              Date.now(),
-            );
-          }
-        },
-      },
-    ],
+    subCommands: [],
   };
+
+  const statusCommand: SlashCommand = {
+    name: 'status',
+    description: 'check status of IDE integration',
+    kind: CommandKind.BUILT_IN,
+    action: (_context: CommandContext): SlashCommandActionReturn => {
+      const connection = ideClient.getConnectionStatus();
+      switch (connection.status) {
+        case IDEConnectionStatus.Connected:
+          return {
+            type: 'message',
+            messageType: 'info',
+            content: `🟢 Connected to ${ideClient.getDetectedIdeDisplayName()}`,
+          } as const;
+        case IDEConnectionStatus.Connecting:
+          return {
+            type: 'message',
+            messageType: 'info',
+            content: `🟡 Connecting...`,
+          } as const;
+        default: {
+          let content = `🔴 Disconnected`;
+          if (connection?.details) {
+            content += `: ${connection.details}`;
+          }
+          return {
+            type: 'message',
+            messageType: 'error',
+            content,
+          } as const;
+        }
+      }
+    },
+  };
+
+  const installCommand: SlashCommand = {
+    name: 'install',
+    description: `install required IDE companion for ${ideClient.getDetectedIdeDisplayName()}`,
+    kind: CommandKind.BUILT_IN,
+    action: async (context) => {
+      const installer = getIdeInstaller(currentIDE);
+      if (!installer) {
+        context.ui.addItem(
+          {
+            type: 'error',
+            text: `No installer is available for ${ideClient.getDetectedIdeDisplayName()}. Please install the IDE companion manually from its marketplace.`,
+          },
+          Date.now(),
+        );
+        return;
+      }
+
+      context.ui.addItem(
+        {
+          type: 'info',
+          text: `Installing IDE companion...`,
+        },
+        Date.now(),
+      );
+
+      const result = await installer.install();
+      context.ui.addItem(
+        {
+          type: result.success ? 'info' : 'error',
+          text: result.message,
+        },
+        Date.now(),
+      );
+    },
+  };
+
+  const enableCommand: SlashCommand = {
+    name: 'enable',
+    description: 'enable IDE integration',
+    kind: CommandKind.BUILT_IN,
+    action: async (context: CommandContext) => {
+      context.services.settings.setValue(SettingScope.User, 'ideMode', true);
+      config.setIdeMode(true);
+      config.setIdeClientConnected();
+    },
+  };
+
+  const disableCommand: SlashCommand = {
+    name: 'disable',
+    description: 'disable IDE integration',
+    kind: CommandKind.BUILT_IN,
+    action: async (context: CommandContext) => {
+      context.services.settings.setValue(SettingScope.User, 'ideMode', false);
+      config.setIdeMode(false);
+      config.setIdeClientDisconnected();
+    },
+  };
+
+  const ideModeEnabled = config.getIdeMode();
+  if (ideModeEnabled) {
+    ideSlashCommand.subCommands = [
+      disableCommand,
+      statusCommand,
+      installCommand,
+    ];
+  } else {
+    ideSlashCommand.subCommands = [
+      enableCommand,
+      statusCommand,
+      installCommand,
+    ];
+  }
+
+  return ideSlashCommand;
 };
diff --git a/packages/cli/src/ui/commands/initCommand.test.ts b/packages/cli/src/ui/commands/initCommand.test.ts
new file mode 100644
index 00000000..83cea944
--- /dev/null
+++ b/packages/cli/src/ui/commands/initCommand.test.ts
@@ -0,0 +1,102 @@
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import { vi, describe, it, expect, beforeEach, afterEach } from 'vitest';
+import * as fs from 'fs';
+import * as path from 'path';
+import { initCommand } from './initCommand.js';
+import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
+import { type CommandContext } from './types.js';
+
+// Mock the 'fs' module
+vi.mock('fs', () => ({
+  existsSync: vi.fn(),
+  writeFileSync: vi.fn(),
+}));
+
+describe('initCommand', () => {
+  let mockContext: CommandContext;
+  const targetDir = '/test/dir';
+  const geminiMdPath = path.join(targetDir, 'GEMINI.md');
+
+  beforeEach(() => {
+    // Create a fresh mock context for each test
+    mockContext = createMockCommandContext({
+      services: {
+        config: {
+          getTargetDir: () => targetDir,
+        },
+      },
+    });
+  });
+
+  afterEach(() => {
+    // Clear all mocks after each test
+    vi.clearAllMocks();
+  });
+
+  it('should inform the user if GEMINI.md already exists', async () => {
+    // Arrange: Simulate that the file exists
+    vi.mocked(fs.existsSync).mockReturnValue(true);
+
+    // Act: Run the command's action
+    const result = await initCommand.action!(mockContext, '');
+
+    // Assert: Check for the correct informational message
+    expect(result).toEqual({
+      type: 'message',
+      messageType: 'info',
+      content:
+        'A GEMINI.md file already exists in this directory. No changes were made.',
+    });
+    // Assert: Ensure no file was written
+    expect(fs.writeFileSync).not.toHaveBeenCalled();
+  });
+
+  it('should create GEMINI.md and submit a prompt if it does not exist', async () => {
+    // Arrange: Simulate that the file does not exist
+    vi.mocked(fs.existsSync).mockReturnValue(false);
+
+    // Act: Run the command's action
+    const result = await initCommand.action!(mockContext, '');
+
+    // Assert: Check that writeFileSync was called correctly
+    expect(fs.writeFileSync).toHaveBeenCalledWith(geminiMdPath, '', 'utf8');
+
+    // Assert: Check that an informational message was added to the UI
+    expect(mockContext.ui.addItem).toHaveBeenCalledWith(
+      {
+        type: 'info',
+        text: 'Empty GEMINI.md created. Now analyzing the project to populate it.',
+      },
+      expect.any(Number),
+    );
+
+    // Assert: Check that the correct prompt is submitted
+    expect(result.type).toBe('submit_prompt');
+    expect(result.content).toContain(
+      'You are an AI agent that brings the power of Gemini',
+    );
+  });
+
+  it('should return an error if config is not available', async () => {
+    // Arrange: Create a context without config
+    const noConfigContext = createMockCommandContext();
+    if (noConfigContext.services) {
+      noConfigContext.services.config = null;
+    }
+
+    // Act: Run the command's action
+    const result = await initCommand.action!(noConfigContext, '');
+
+    // Assert: Check for the correct error message
+    expect(result).toEqual({
+      type: 'message',
+      messageType: 'error',
+      content: 'Configuration not available.',
+    });
+  });
+});
diff --git a/packages/cli/src/ui/commands/initCommand.ts b/packages/cli/src/ui/commands/initCommand.ts
new file mode 100644
index 00000000..ad69d0da
--- /dev/null
+++ b/packages/cli/src/ui/commands/initCommand.ts
@@ -0,0 +1,93 @@
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import * as fs from 'fs';
+import * as path from 'path';
+import {
+  CommandContext,
+  SlashCommand,
+  SlashCommandActionReturn,
+  CommandKind,
+} from './types.js';
+
+export const initCommand: SlashCommand = {
+  name: 'init',
+  description: 'Analyzes the project and creates a tailored GEMINI.md file.',
+  kind: CommandKind.BUILT_IN,
+  action: async (
+    context: CommandContext,
+    _args: string,
+  ): Promise<SlashCommandActionReturn> => {
+    if (!context.services.config) {
+      return {
+        type: 'message',
+        messageType: 'error',
+        content: 'Configuration not available.',
+      };
+    }
+    const targetDir = context.services.config.getTargetDir();
+    const geminiMdPath = path.join(targetDir, 'GEMINI.md');
+
+    if (fs.existsSync(geminiMdPath)) {
+      return {
+        type: 'message',
+        messageType: 'info',
+        content:
+          'A GEMINI.md file already exists in this directory. No changes were made.',
+      };
+    }
+
+    // Create an empty GEMINI.md file
+    fs.writeFileSync(geminiMdPath, '', 'utf8');
+
+    context.ui.addItem(
+      {
+        type: 'info',
+        text: 'Empty GEMINI.md created. Now analyzing the project to populate it.',
+      },
+      Date.now(),
+    );
+
+    return {
+      type: 'submit_prompt',
+      content: `
+You are an AI agent that brings the power of Gemini directly into the terminal. Your task is to analyze the current directory and generate a comprehensive GEMINI.md file to be used as instructional context for future interactions.
+
+**Analysis Process:**
+
+1.  **Initial Exploration:**
+    *   Start by listing the files and directories to get a high-level overview of the structure.
+    *   Read the README file (e.g., \`README.md\`, \`README.txt\`) if it exists. This is often the best place to start.
+
+2.  **Iterative Deep Dive (up to 10 files):**
+    *   Based on your initial findings, select a few files that seem most important (e.g., configuration files, main source files, documentation).
+    *   Read them. As you learn more, refine your understanding and decide which files to read next. You don't need to decide all 10 files at once. Let your discoveries guide your exploration.
+
+3.  **Identify Project Type:**
+    *   **Code Project:** Look for clues like \`package.json\`, \`requirements.txt\`, \`pom.xml\`, \`go.mod\`, \`Cargo.toml\`, \`build.gradle\`, or a \`src\` directory. If you find them, this is likely a software project.
+    *   **Non-Code Project:** If you don't find code-related files, this might be a directory for documentation, research papers, notes, or something else.
+
+**GEMINI.md Content Generation:**
+
+**For a Code Project:**
+
+*   **Project Overview:** Write a clear and concise summary of the project's purpose, main technologies, and architecture.
+*   **Building and Running:** Document the key commands for building, running, and testing the project. Infer these from the files you've read (e.g., \`scripts\` in \`package.json\`, \`Makefile\`, etc.). If you can't find explicit commands, provide a placeholder with a TODO.
+*   **Development Conventions:** Describe any coding styles, testing practices, or contribution guidelines you can infer from the codebase.
+
+**For a Non-Code Project:**
+
+*   **Directory Overview:** Describe the purpose and contents of the directory. What is it for? What kind of information does it hold?
+*   **Key Files:** List the most important files and briefly explain what they contain.
+*   **Usage:** Explain how the contents of this directory are intended to be used.
+
+**Final Output:**
+
+Write the complete content to the \`GEMINI.md\` file. The output must be well-formatted Markdown.
+`,
+    };
+  },
+};
diff --git a/packages/cli/src/ui/commands/mcpCommand.test.ts b/packages/cli/src/ui/commands/mcpCommand.test.ts
index 53a23d84..8c7e3199 100644
--- a/packages/cli/src/ui/commands/mcpCommand.test.ts
+++ b/packages/cli/src/ui/commands/mcpCommand.test.ts
@@ -14,15 +14,10 @@ import {
   getMCPDiscoveryState,
   DiscoveredMCPTool,
 } from '@qwen-code/qwen-code-core';
-import open from 'open';
+
 import { MessageActionReturn } from './types.js';
 import { Type, CallableTool } from '@google/genai';
 
-// Mock external dependencies
-vi.mock('open', () => ({
-  default: vi.fn(),
-}));
-
 vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
   const actual = await importOriginal<typeof import('@qwen-code/qwen-code-core')>();
@@ -144,30 +139,15 @@ describe('mcpCommand', () => {
       mockConfig.getMcpServers = vi.fn().mockReturnValue({});
     });
 
-    it('should display a message with a URL when no MCP servers are configured in a sandbox', async () => {
-      process.env.SANDBOX = 'sandbox';
-
+    it('should display a message with a URL when no MCP servers are configured', async () => {
       const result = await mcpCommand.action!(mockContext, '');
 
       expect(result).toEqual({
         type: 'message',
         messageType: 'info',
         content:
-          'No MCP servers configured. Please open the following URL in your browser to view documentation:\nhttps://goo.gle/gemini-cli-docs-mcp',
+          'No MCP servers configured. Please view MCP documentation in your browser: https://goo.gle/gemini-cli-docs-mcp or use the cli /docs command',
       });
-      expect(open).not.toHaveBeenCalled();
-    });
-
-    it('should display a message and open a URL when no MCP servers are configured outside a sandbox', async () => {
-      const result = await mcpCommand.action!(mockContext, '');
-
-      expect(result).toEqual({
-        type: 'message',
-        messageType: 'info',
-        content:
-          'No MCP servers configured. Opening documentation in your browser: https://goo.gle/gemini-cli-docs-mcp',
-      });
-      expect(open).toHaveBeenCalledWith('https://goo.gle/gemini-cli-docs-mcp');
     });
   });
 
@@ -232,9 +212,9 @@ describe('mcpCommand', () => {
       );
       expect(message).toContain('server2_tool1');
 
-      // Server 3 - Disconnected
+      // Server 3 - Disconnected but with cached tools, so shows as Ready
      expect(message).toContain(
-        '🔴 \u001b[1mserver3\u001b[0m - Disconnected (1 tools cached)',
+        '🟢 \u001b[1mserver3\u001b[0m - Ready (1 tool)',
      );
       expect(message).toContain('server3_tool1');
diff --git a/packages/cli/src/ui/commands/mcpCommand.ts b/packages/cli/src/ui/commands/mcpCommand.ts
index 2a3ba718..2660da7a 100644
--- a/packages/cli/src/ui/commands/mcpCommand.ts
+++ b/packages/cli/src/ui/commands/mcpCommand.ts
@@ -21,7 +21,6 @@ import {
   mcpServerRequiresOAuth,
   getErrorMessage,
 } from '@qwen-code/qwen-code-core';
-import open from 'open';
 
 const COLOR_GREEN = '\u001b[32m';
 const COLOR_YELLOW = '\u001b[33m';
@@ -60,21 +59,11 @@ const getMcpStatus = async (
 
   if (serverNames.length === 0 && blockedMcpServers.length === 0) {
     const docsUrl = 'https://goo.gle/gemini-cli-docs-mcp';
-    if (process.env.SANDBOX && process.env.SANDBOX !== 'sandbox-exec') {
-      return {
-        type: 'message',
-        messageType: 'info',
-        content: `No MCP servers configured. Please open the following URL in your browser to view documentation:\n${docsUrl}`,
-      };
-    } else {
-      // Open the URL in the browser
-      await open(docsUrl);
-      return {
-        type: 'message',
-        messageType: 'info',
-        content: `No MCP servers configured. Opening documentation in your browser: ${docsUrl}`,
-      };
-    }
+    return {
+      type: 'message',
+      messageType: 'info',
+      content: `No MCP servers configured. Please view MCP documentation in your browser: ${docsUrl} or use the cli /docs command`,
+    };
   }
 
   // Check if any servers are still connecting
@@ -105,7 +94,15 @@ const getMcpStatus = async (
     const promptRegistry = await config.getPromptRegistry();
     const serverPrompts = promptRegistry.getPromptsByServer(serverName) || [];
 
-    const status = getMCPServerStatus(serverName);
+    const originalStatus = getMCPServerStatus(serverName);
+    const hasCachedItems = serverTools.length > 0 || serverPrompts.length > 0;
+
+    // If the server is "disconnected" but has prompts or cached tools, display it as Ready
+    // by using CONNECTED as the display status.
+    const status =
+      originalStatus === MCPServerStatus.DISCONNECTED && hasCachedItems
+        ? MCPServerStatus.CONNECTED
+        : originalStatus;
 
     // Add status indicator with descriptive text
     let statusIndicator = '';
@@ -271,11 +268,14 @@ const getMcpStatus = async (
       message += '    No tools or prompts available\n';
     } else if (serverTools.length === 0) {
       message += '    No tools available';
-      if (status === MCPServerStatus.DISCONNECTED && needsAuthHint) {
+      if (originalStatus === MCPServerStatus.DISCONNECTED && needsAuthHint) {
        message += ` ${COLOR_GREY}(type: "/mcp auth ${serverName}" to authenticate this server)${RESET_COLOR}`;
       }
       message += '\n';
-    } else if (status === MCPServerStatus.DISCONNECTED && needsAuthHint) {
+    } else if (
+      originalStatus === MCPServerStatus.DISCONNECTED &&
+      needsAuthHint
+    ) {
       // This case is for when serverTools.length > 0
       message += ` ${COLOR_GREY}(type: "/mcp auth ${serverName}" to authenticate this server)${RESET_COLOR}\n`;
     }
diff --git a/packages/cli/src/ui/commands/memoryCommand.ts b/packages/cli/src/ui/commands/memoryCommand.ts
index e8f1224a..fd557e09 100644
--- a/packages/cli/src/ui/commands/memoryCommand.ts
+++ b/packages/cli/src/ui/commands/memoryCommand.ts
@@ -92,6 +92,7 @@ export const memoryCommand: SlashCommand = {
             config.getDebugMode(),
             config.getFileService(),
             config.getExtensionContextFilePaths(),
+            context.services.settings.merged.memoryImportFormat || 'tree', // Use setting or default to 'tree'
             config.getFileFilteringOptions(),
             context.services.settings.merged.memoryDiscoveryMaxDirs,
           );
diff --git a/packages/cli/src/ui/commands/setupGithubCommand.test.ts b/packages/cli/src/ui/commands/setupGithubCommand.test.ts
new file mode 100644
index 00000000..7c654149
--- /dev/null
+++ b/packages/cli/src/ui/commands/setupGithubCommand.test.ts
@@ -0,0 +1,66 @@
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import { vi, describe, expect, it, afterEach, beforeEach } from 'vitest';
+import * as child_process from 'child_process';
+import { setupGithubCommand } from './setupGithubCommand.js';
+import { CommandContext, ToolActionReturn } from './types.js';
+
+vi.mock('child_process');
+
+describe('setupGithubCommand', () => {
+  beforeEach(() => {
+    vi.resetAllMocks();
+  });
+
+  afterEach(() => {
+    vi.restoreAllMocks();
+  });
+
+  it('returns a tool action to download github workflows and handles paths', () => {
+    const fakeRepoRoot = '/github.com/fake/repo/root';
+    vi.mocked(child_process.execSync).mockReturnValue(fakeRepoRoot);
+
+    const result = setupGithubCommand.action?.(
+      {} as CommandContext,
+      '',
+    ) as ToolActionReturn;
+
+    expect(result.type).toBe('tool');
+    expect(result.toolName).toBe('run_shell_command');
+    expect(child_process.execSync).toHaveBeenCalledWith(
+      'git rev-parse --show-toplevel',
+      {
+        encoding: 'utf-8',
+      },
+    );
+    expect(child_process.execSync).toHaveBeenCalledWith('git remote -v', {
+      encoding: 'utf-8',
+    });
+
+    const { command } = result.toolArgs;
+
+    const expectedSubstrings = [
+      `mkdir -p "${fakeRepoRoot}/.github/workflows"`,
+      `curl -fsSL -o "${fakeRepoRoot}/.github/workflows/gemini-cli.yml"`,
+      `curl -fsSL -o "${fakeRepoRoot}/.github/workflows/gemini-issue-automated-triage.yml"`,
+      `curl -fsSL -o "${fakeRepoRoot}/.github/workflows/gemini-issue-scheduled-triage.yml"`,
+      `curl -fsSL -o 
"${fakeRepoRoot}/.github/workflows/gemini-pr-review.yml"`, + 'https://raw.githubusercontent.com/google-github-actions/run-gemini-cli/refs/heads/v0/examples/workflows/', + ]; + + for (const substring of expectedSubstrings) { + expect(command).toContain(substring); + } + }); + + it('throws an error if git root cannot be determined', () => { + vi.mocked(child_process.execSync).mockReturnValue(''); + expect(() => { + setupGithubCommand.action?.({} as CommandContext, ''); + }).toThrow('Unable to determine the Git root directory.'); + }); +}); diff --git a/packages/cli/src/ui/commands/setupGithubCommand.ts b/packages/cli/src/ui/commands/setupGithubCommand.ts new file mode 100644 index 00000000..9dd12292 --- /dev/null +++ b/packages/cli/src/ui/commands/setupGithubCommand.ts @@ -0,0 +1,59 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import path from 'path'; +import { execSync } from 'child_process'; +import { isGitHubRepository } from '../../utils/gitUtils.js'; + +import { + CommandKind, + SlashCommand, + SlashCommandActionReturn, +} from './types.js'; + +export const setupGithubCommand: SlashCommand = { + name: 'setup-github', + description: 'Set up GitHub Actions', + kind: CommandKind.BUILT_IN, + action: (): SlashCommandActionReturn => { + const gitRootRepo = execSync('git rev-parse --show-toplevel', { + encoding: 'utf-8', + }).trim(); + + if (!isGitHubRepository()) { + throw new Error('Unable to determine the Git root directory.'); + } + + const version = 'v0'; + const workflowBaseUrl = `https://raw.githubusercontent.com/google-github-actions/run-gemini-cli/refs/heads/${version}/examples/workflows/`; + + const workflows = [ + 'gemini-cli/gemini-cli.yml', + 'issue-triage/gemini-issue-automated-triage.yml', + 'issue-triage/gemini-issue-scheduled-triage.yml', + 'pr-review/gemini-pr-review.yml', + ]; + + const command = [ + 'set -e', + `mkdir -p "${gitRootRepo}/.github/workflows"`, + ...workflows.map((workflow) => { + 
const fileName = path.basename(workflow); + return `curl -fsSL -o "${gitRootRepo}/.github/workflows/${fileName}" "${workflowBaseUrl}/${workflow}"`; + }), + 'echo "Workflows downloaded successfully."', + ].join(' && '); + return { + type: 'tool', + toolName: 'run_shell_command', + toolArgs: { + description: + 'Setting up GitHub Actions to triage issues and review PRs with Gemini.', + command, + }, + }; + }, +}; diff --git a/packages/cli/src/ui/commands/types.ts b/packages/cli/src/ui/commands/types.ts index 21ab7828..b546c637 100644 --- a/packages/cli/src/ui/commands/types.ts +++ b/packages/cli/src/ui/commands/types.ts @@ -99,7 +99,7 @@ export interface MessageActionReturn { */ export interface OpenDialogActionReturn { type: 'dialog'; - dialog: 'help' | 'auth' | 'theme' | 'editor' | 'privacy'; + dialog: 'auth' | 'theme' | 'editor' | 'privacy'; } /** @@ -158,6 +158,9 @@ export interface SlashCommand { kind: CommandKind; + // Optional metadata for extension commands + extensionName?: string; + // The action to run. Optional for parent commands that only group sub-commands. 
  action?: (
    context: CommandContext,
diff --git a/packages/cli/src/ui/components/ContextSummaryDisplay.tsx b/packages/cli/src/ui/components/ContextSummaryDisplay.tsx
index ef281f5f..bbc564fc 100644
--- a/packages/cli/src/ui/components/ContextSummaryDisplay.tsx
+++ b/packages/cli/src/ui/components/ContextSummaryDisplay.tsx
@@ -8,7 +8,7 @@ import React from 'react';
 import { Text } from 'ink';
 import { Colors } from '../colors.js';
 import {
-  type OpenFiles,
+  type IdeContext,
   type MCPServerConfig,
 } from '@qwen-code/qwen-code-core';
 
@@ -18,7 +18,7 @@ interface ContextSummaryDisplayProps {
   mcpServers?: Record<string, MCPServerConfig>;
   blockedMcpServers?: Array<{ name: string; extensionName: string }>;
   showToolDescriptions?: boolean;
-  openFiles?: OpenFiles;
+  ideContext?: IdeContext;
 }
 
 export const ContextSummaryDisplay: React.FC<ContextSummaryDisplayProps> = ({
@@ -27,26 +27,28 @@ export const ContextSummaryDisplay: React.FC<ContextSummaryDisplayProps> = ({
   mcpServers,
   blockedMcpServers,
   showToolDescriptions,
-  openFiles,
+  ideContext,
 }) => {
   const mcpServerCount = Object.keys(mcpServers || {}).length;
   const blockedMcpServerCount = blockedMcpServers?.length || 0;
+  const openFileCount = ideContext?.workspaceState?.openFiles?.length ?? 0;
 
   if (
     geminiMdFileCount === 0 &&
     mcpServerCount === 0 &&
     blockedMcpServerCount === 0 &&
-    (openFiles?.recentOpenFiles?.length ?? 0) === 0
+    openFileCount === 0
   ) {
     return <Text> </Text>; // Render an empty space to reserve height
   }
 
-  const recentFilesText = (() => {
-    const count = openFiles?.recentOpenFiles?.length ?? 0;
-    if (count === 0) {
+  const openFilesText = (() => {
+    if (openFileCount === 0) {
       return '';
     }
-    return `${count} recent file${count > 1 ? 's' : ''} (ctrl+e to view)`;
+    return `${openFileCount} open file${
+      openFileCount > 1 ?
's' : '' + } (ctrl+e to view)`; })(); const geminiMdText = (() => { @@ -84,8 +86,8 @@ export const ContextSummaryDisplay: React.FC = ({ let summaryText = 'Using: '; const summaryParts = []; - if (recentFilesText) { - summaryParts.push(recentFilesText); + if (openFilesText) { + summaryParts.push(openFilesText); } if (geminiMdText) { summaryParts.push(geminiMdText); diff --git a/packages/cli/src/ui/components/DebugProfiler.tsx b/packages/cli/src/ui/components/DebugProfiler.tsx new file mode 100644 index 00000000..89c40a91 --- /dev/null +++ b/packages/cli/src/ui/components/DebugProfiler.tsx @@ -0,0 +1,32 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { Text, useInput } from 'ink'; +import { useEffect, useRef, useState } from 'react'; +import { Colors } from '../colors.js'; + +export const DebugProfiler = () => { + const numRenders = useRef(0); + const [showNumRenders, setShowNumRenders] = useState(false); + + useEffect(() => { + numRenders.current++; + }); + + useInput((input, key) => { + if (key.ctrl && input === 'b') { + setShowNumRenders((prev) => !prev); + } + }); + + if (!showNumRenders) { + return null; + } + + return ( + Renders: {numRenders.current} + ); +}; diff --git a/packages/cli/src/ui/components/Footer.tsx b/packages/cli/src/ui/components/Footer.tsx index af2fb9b3..93a57f5f 100644 --- a/packages/cli/src/ui/components/Footer.tsx +++ b/packages/cli/src/ui/components/Footer.tsx @@ -17,6 +17,8 @@ import process from 'node:process'; import Gradient from 'ink-gradient'; import { MemoryUsageDisplay } from './MemoryUsageDisplay.js'; +import { DebugProfiler } from './DebugProfiler.js'; + interface FooterProps { model: string; targetDir: string; @@ -52,6 +54,7 @@ export const Footer: React.FC = ({ return ( + {debugMode && } {vimMode && [{vimMode}] } {nightly ? 
( diff --git a/packages/cli/src/ui/components/Help.tsx b/packages/cli/src/ui/components/Help.tsx index ecad9b5e..d9f7b4a8 100644 --- a/packages/cli/src/ui/components/Help.tsx +++ b/packages/cli/src/ui/components/Help.tsx @@ -103,9 +103,15 @@ export const Help: React.FC = ({ commands }) => ( - Enter + Alt+Left/Right {' '} - - Send message + - Jump through words in the input + + + + Ctrl+C + {' '} + - Quit application @@ -117,21 +123,15 @@ export const Help: React.FC = ({ commands }) => ( - Up/Down + Ctrl+L {' '} - - Cycle through your prompt history + - Clear the screen - Alt+Left/Right + {process.platform === 'darwin' ? 'Ctrl+X / Meta+Enter' : 'Ctrl+X'} {' '} - - Jump through words in the input - - - - Shift+Tab - {' '} - - Toggle auto-accepting edits + - Open input in external editor @@ -139,6 +139,12 @@ export const Help: React.FC = ({ commands }) => ( {' '} - Toggle YOLO mode + + + Enter + {' '} + - Send message + Esc @@ -147,9 +153,22 @@ export const Help: React.FC = ({ commands }) => ( - Ctrl+C + Shift+Tab {' '} - - Quit application + - Toggle auto-accepting edits + + + + Up/Down + {' '} + - Cycle through your prompt history + + + + For a full list of shortcuts, see{' '} + + docs/keyboard-shortcuts.md + ); diff --git a/packages/cli/src/ui/components/HistoryItemDisplay.test.tsx b/packages/cli/src/ui/components/HistoryItemDisplay.test.tsx index 6edcc649..eb8e3d0d 100644 --- a/packages/cli/src/ui/components/HistoryItemDisplay.test.tsx +++ b/packages/cli/src/ui/components/HistoryItemDisplay.test.tsx @@ -35,6 +35,18 @@ describe('', () => { expect(lastFrame()).toContain('Hello'); }); + it('renders UserMessage for "user" type with slash command', () => { + const item: HistoryItem = { + ...baseItem, + type: MessageType.USER, + text: '/theme', + }; + const { lastFrame } = render( + , + ); + expect(lastFrame()).toContain('/theme'); + }); + it('renders StatsDisplay for "stats" type', () => { const item: HistoryItem = { ...baseItem, diff --git 
a/packages/cli/src/ui/components/HistoryItemDisplay.tsx b/packages/cli/src/ui/components/HistoryItemDisplay.tsx index 00182334..91b18a7e 100644 --- a/packages/cli/src/ui/components/HistoryItemDisplay.tsx +++ b/packages/cli/src/ui/components/HistoryItemDisplay.tsx @@ -21,6 +21,8 @@ import { ModelStatsDisplay } from './ModelStatsDisplay.js'; import { ToolStatsDisplay } from './ToolStatsDisplay.js'; import { SessionSummaryDisplay } from './SessionSummaryDisplay.js'; import { Config } from '@qwen-code/qwen-code-core'; +import { Help } from './Help.js'; +import { SlashCommand } from '../commands/types.js'; interface HistoryItemDisplayProps { item: HistoryItem; @@ -29,6 +31,7 @@ interface HistoryItemDisplayProps { isPending: boolean; config?: Config; isFocused?: boolean; + commands?: readonly SlashCommand[]; } export const HistoryItemDisplay: React.FC = ({ @@ -37,6 +40,7 @@ export const HistoryItemDisplay: React.FC = ({ terminalWidth, isPending, config, + commands, isFocused = true, }) => ( @@ -71,6 +75,7 @@ export const HistoryItemDisplay: React.FC = ({ gcpProject={item.gcpProject} /> )} + {item.type === 'help' && commands && } {item.type === 'stats' && } {item.type === 'model_stats' && } {item.type === 'tool_stats' && } diff --git a/packages/cli/src/ui/components/IDEContextDetailDisplay.tsx b/packages/cli/src/ui/components/IDEContextDetailDisplay.tsx index 0568ac91..bebb1b9b 100644 --- a/packages/cli/src/ui/components/IDEContextDetailDisplay.tsx +++ b/packages/cli/src/ui/components/IDEContextDetailDisplay.tsx @@ -4,26 +4,24 @@ * SPDX-License-Identifier: Apache-2.0 */ +import { type File, type IdeContext } from '@qwen-code/qwen-code-core'; import { Box, Text } from 'ink'; -import { type OpenFiles } from '@qwen-code/qwen-code-core'; -import { Colors } from '../colors.js'; import path from 'node:path'; +import { Colors } from '../colors.js'; interface IDEContextDetailDisplayProps { - openFiles: OpenFiles | undefined; + ideContext: IdeContext | undefined; + 
detectedIdeDisplay: string | undefined; } export function IDEContextDetailDisplay({ - openFiles, + ideContext, + detectedIdeDisplay, }: IDEContextDetailDisplayProps) { - if ( - !openFiles || - !openFiles.recentOpenFiles || - openFiles.recentOpenFiles.length === 0 - ) { + const openFiles = ideContext?.workspaceState?.openFiles; + if (!openFiles || openFiles.length === 0) { return null; } - const recentFiles = openFiles.recentOpenFiles || []; return ( - IDE Context (ctrl+e to toggle) + {detectedIdeDisplay ? detectedIdeDisplay : 'IDE'} Context (ctrl+e to + toggle) - {recentFiles.length > 0 && ( + {openFiles.length > 0 && ( - Recent files: - {recentFiles.map((file) => ( - - - {path.basename(file.filePath)} - {file.filePath === openFiles.activeFile ? ' (active)' : ''} + Open files: + {openFiles.map((file: File) => ( + + - {path.basename(file.path)} + {file.isActive ? ' (active)' : ''} ))} diff --git a/packages/cli/src/ui/components/InputPrompt.test.tsx b/packages/cli/src/ui/components/InputPrompt.test.tsx index b5246dc4..de088378 100644 --- a/packages/cli/src/ui/components/InputPrompt.test.tsx +++ b/packages/cli/src/ui/components/InputPrompt.test.tsx @@ -19,7 +19,10 @@ import { useShellHistory, UseShellHistoryReturn, } from '../hooks/useShellHistory.js'; -import { useCompletion, UseCompletionReturn } from '../hooks/useCompletion.js'; +import { + useCommandCompletion, + UseCommandCompletionReturn, +} from '../hooks/useCommandCompletion.js'; import { useInputHistory, UseInputHistoryReturn, @@ -28,7 +31,7 @@ import * as clipboardUtils from '../utils/clipboardUtils.js'; import { createMockCommandContext } from '../../test-utils/mockCommandContext.js'; vi.mock('../hooks/useShellHistory.js'); -vi.mock('../hooks/useCompletion.js'); +vi.mock('../hooks/useCommandCompletion.js'); vi.mock('../hooks/useInputHistory.js'); vi.mock('../utils/clipboardUtils.js'); @@ -83,13 +86,13 @@ const mockSlashCommands: SlashCommand[] = [ describe('InputPrompt', () => { let props: InputPromptProps; 
let mockShellHistory: UseShellHistoryReturn; - let mockCompletion: UseCompletionReturn; + let mockCommandCompletion: UseCommandCompletionReturn; let mockInputHistory: UseInputHistoryReturn; let mockBuffer: TextBuffer; let mockCommandContext: CommandContext; const mockedUseShellHistory = vi.mocked(useShellHistory); - const mockedUseCompletion = vi.mocked(useCompletion); + const mockedUseCommandCompletion = vi.mocked(useCommandCompletion); const mockedUseInputHistory = vi.mocked(useInputHistory); beforeEach(() => { @@ -115,7 +118,9 @@ describe('InputPrompt', () => { visualScrollRow: 0, handleInput: vi.fn(), move: vi.fn(), - moveToOffset: vi.fn(), + moveToOffset: (offset: number) => { + mockBuffer.cursor = [0, offset]; + }, killLineRight: vi.fn(), killLineLeft: vi.fn(), openInExternalEditor: vi.fn(), @@ -133,6 +138,7 @@ describe('InputPrompt', () => { } as unknown as TextBuffer; mockShellHistory = { + history: [], addCommandToHistory: vi.fn(), getPreviousCommand: vi.fn().mockReturnValue(null), getNextCommand: vi.fn().mockReturnValue(null), @@ -140,7 +146,7 @@ describe('InputPrompt', () => { }; mockedUseShellHistory.mockReturnValue(mockShellHistory); - mockCompletion = { + mockCommandCompletion = { suggestions: [], activeSuggestionIndex: -1, isLoadingSuggestions: false, @@ -154,7 +160,7 @@ describe('InputPrompt', () => { setShowSuggestions: vi.fn(), handleAutocomplete: vi.fn(), }; - mockedUseCompletion.mockReturnValue(mockCompletion); + mockedUseCommandCompletion.mockReturnValue(mockCommandCompletion); mockInputHistory = { navigateUp: vi.fn(), @@ -172,6 +178,9 @@ describe('InputPrompt', () => { getProjectRoot: () => path.join('test', 'project'), getTargetDir: () => path.join('test', 'project', 'src'), getVimMode: () => false, + getWorkspaceContext: () => ({ + getDirectories: () => ['/test/project/src'], + }), } as unknown as Config, slashCommands: mockSlashCommands, commandContext: mockCommandContext, @@ -262,8 +271,8 @@ describe('InputPrompt', () => { }); it('should 
call completion.navigateUp for both up arrow and Ctrl+P when suggestions are showing', async () => { - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [ { label: 'memory', value: 'memory' }, @@ -282,15 +291,15 @@ describe('InputPrompt', () => { stdin.write('\u0010'); // Ctrl+P await wait(); - expect(mockCompletion.navigateUp).toHaveBeenCalledTimes(2); - expect(mockCompletion.navigateDown).not.toHaveBeenCalled(); + expect(mockCommandCompletion.navigateUp).toHaveBeenCalledTimes(2); + expect(mockCommandCompletion.navigateDown).not.toHaveBeenCalled(); unmount(); }); it('should call completion.navigateDown for both down arrow and Ctrl+N when suggestions are showing', async () => { - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [ { label: 'memory', value: 'memory' }, @@ -308,15 +317,15 @@ describe('InputPrompt', () => { stdin.write('\u000E'); // Ctrl+N await wait(); - expect(mockCompletion.navigateDown).toHaveBeenCalledTimes(2); - expect(mockCompletion.navigateUp).not.toHaveBeenCalled(); + expect(mockCommandCompletion.navigateDown).toHaveBeenCalledTimes(2); + expect(mockCommandCompletion.navigateUp).not.toHaveBeenCalled(); unmount(); }); it('should NOT call completion navigation when suggestions are not showing', async () => { - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: false, }); props.buffer.setText('some text'); @@ -333,8 +342,8 @@ describe('InputPrompt', () => { stdin.write('\u000E'); // Ctrl+N await wait(); - expect(mockCompletion.navigateUp).not.toHaveBeenCalled(); - expect(mockCompletion.navigateDown).not.toHaveBeenCalled(); + expect(mockCommandCompletion.navigateUp).not.toHaveBeenCalled(); + 
expect(mockCommandCompletion.navigateDown).not.toHaveBeenCalled(); unmount(); }); @@ -463,8 +472,8 @@ describe('InputPrompt', () => { it('should complete a partial parent command', async () => { // SCENARIO: /mem -> Tab - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'memory', value: 'memory', description: '...' }], activeSuggestionIndex: 0, @@ -477,14 +486,14 @@ describe('InputPrompt', () => { stdin.write('\t'); // Press Tab await wait(); - expect(mockCompletion.handleAutocomplete).toHaveBeenCalledWith(0); + expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(0); unmount(); }); it('should append a sub-command when the parent command is already complete', async () => { // SCENARIO: /memory -> Tab (to accept 'add') - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [ { label: 'show', value: 'show' }, @@ -500,14 +509,14 @@ describe('InputPrompt', () => { stdin.write('\t'); // Press Tab await wait(); - expect(mockCompletion.handleAutocomplete).toHaveBeenCalledWith(1); + expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(1); unmount(); }); it('should handle the "backspace" edge case correctly', async () => { // SCENARIO: /memory -> Backspace -> /memory -> Tab (to accept 'show') - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [ { label: 'show', value: 'show' }, @@ -525,14 +534,14 @@ describe('InputPrompt', () => { await wait(); // It should NOT become '/show'. It should correctly become '/memory show'. 
- expect(mockCompletion.handleAutocomplete).toHaveBeenCalledWith(0); + expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(0); unmount(); }); it('should complete a partial argument for a command', async () => { // SCENARIO: /chat resume fi- -> Tab - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'fix-foo', value: 'fix-foo' }], activeSuggestionIndex: 0, @@ -545,13 +554,13 @@ describe('InputPrompt', () => { stdin.write('\t'); // Press Tab await wait(); - expect(mockCompletion.handleAutocomplete).toHaveBeenCalledWith(0); + expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(0); unmount(); }); it('should autocomplete on Enter when suggestions are active, without submitting', async () => { - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'memory', value: 'memory' }], activeSuggestionIndex: 0, @@ -565,7 +574,7 @@ describe('InputPrompt', () => { await wait(); // The app should autocomplete the text, NOT submit. 
- expect(mockCompletion.handleAutocomplete).toHaveBeenCalledWith(0); + expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(0); expect(props.onSubmit).not.toHaveBeenCalled(); unmount(); @@ -581,8 +590,8 @@ describe('InputPrompt', () => { }, ]; - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'help', value: 'help' }], activeSuggestionIndex: 0, @@ -595,7 +604,7 @@ describe('InputPrompt', () => { stdin.write('\t'); // Press Tab for autocomplete await wait(); - expect(mockCompletion.handleAutocomplete).toHaveBeenCalledWith(0); + expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(0); unmount(); }); @@ -613,8 +622,8 @@ describe('InputPrompt', () => { }); it('should submit directly on Enter when isPerfectMatch is true', async () => { - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: false, isPerfectMatch: true, }); @@ -631,8 +640,8 @@ describe('InputPrompt', () => { }); it('should submit directly on Enter when a complete leaf command is typed', async () => { - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: false, isPerfectMatch: false, // Added explicit isPerfectMatch false }); @@ -649,8 +658,8 @@ describe('InputPrompt', () => { }); it('should autocomplete an @-path on Enter without submitting', async () => { - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'index.ts', value: 'index.ts' }], activeSuggestionIndex: 0, @@ -663,7 +672,7 @@ describe('InputPrompt', () => { stdin.write('\r'); await wait(); - expect(mockCompletion.handleAutocomplete).toHaveBeenCalledWith(0); + 
expect(mockCommandCompletion.handleAutocomplete).toHaveBeenCalledWith(0); expect(props.onSubmit).not.toHaveBeenCalled(); unmount(); }); @@ -695,7 +704,7 @@ describe('InputPrompt', () => { await wait(); expect(props.buffer.setText).toHaveBeenCalledWith(''); - expect(mockCompletion.resetCompletionState).toHaveBeenCalled(); + expect(mockCommandCompletion.resetCompletionState).toHaveBeenCalled(); expect(props.onSubmit).not.toHaveBeenCalled(); unmount(); }); @@ -719,8 +728,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['@src/components']; mockBuffer.cursor = [0, 15]; - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'Button.tsx', value: 'Button.tsx' }], }); @@ -729,11 +738,13 @@ describe('InputPrompt', () => { await wait(); // Verify useCompletion was called with correct signature - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -745,8 +756,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['/memory']; mockBuffer.cursor = [0, 7]; - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'show', value: 'show' }], }); @@ -754,11 +765,13 @@ describe('InputPrompt', () => { const { unmount } = render(); await wait(); - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -770,8 +783,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['@src/file.ts hello']; mockBuffer.cursor = [0, 18]; - 
mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: false, suggestions: [], }); @@ -779,11 +792,13 @@ describe('InputPrompt', () => { const { unmount } = render(); await wait(); - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -795,8 +810,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['/memory add']; mockBuffer.cursor = [0, 11]; - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: false, suggestions: [], }); @@ -804,11 +819,13 @@ describe('InputPrompt', () => { const { unmount } = render(); await wait(); - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -820,8 +837,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['hello world']; mockBuffer.cursor = [0, 5]; - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: false, suggestions: [], }); @@ -829,11 +846,13 @@ describe('InputPrompt', () => { const { unmount } = render(); await wait(); - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -845,8 +864,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['first line', '/memory']; mockBuffer.cursor = [1, 7]; - 
mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: false, suggestions: [], }); @@ -855,11 +874,13 @@ describe('InputPrompt', () => { await wait(); // Verify useCompletion was called with the buffer - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -871,8 +892,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['/memory']; mockBuffer.cursor = [0, 7]; - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'show', value: 'show' }], }); @@ -880,11 +901,13 @@ describe('InputPrompt', () => { const { unmount } = render(); await wait(); - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -897,8 +920,8 @@ describe('InputPrompt', () => { mockBuffer.lines = ['@src/file👍.txt']; mockBuffer.cursor = [0, 14]; // After the emoji character - mockedUseCompletion.mockReturnValue({ - ...mockCompletion, + mockedUseCommandCompletion.mockReturnValue({ + ...mockCommandCompletion, showSuggestions: true, suggestions: [{ label: 'file👍.txt', value: 'file👍.txt' }], }); @@ -906,11 +929,13 @@ describe('InputPrompt', () => { const { unmount } = render(); await wait(); - expect(mockedUseCompletion).toHaveBeenCalledWith( + expect(mockedUseCommandCompletion).toHaveBeenCalledWith( mockBuffer, + ['/test/project/src'], path.join('test', 'project', 'src'), mockSlashCommands, mockCommandContext, + false, expect.any(Object), ); @@ -923,8 +948,8 @@ 
describe('InputPrompt', () => {
       mockBuffer.lines = ['@src/file👍.txt hello'];
       mockBuffer.cursor = [0, 20]; // After the space

-      mockedUseCompletion.mockReturnValue({
-        ...mockCompletion,
+      mockedUseCommandCompletion.mockReturnValue({
+        ...mockCommandCompletion,
         showSuggestions: false,
         suggestions: [],
       });
@@ -932,11 +957,13 @@ describe('InputPrompt', () => {
       const { unmount } = render();
       await wait();

-      expect(mockedUseCompletion).toHaveBeenCalledWith(
+      expect(mockedUseCommandCompletion).toHaveBeenCalledWith(
         mockBuffer,
+        ['/test/project/src'],
         path.join('test', 'project', 'src'),
         mockSlashCommands,
         mockCommandContext,
+        false,
         expect.any(Object),
       );
@@ -949,8 +976,8 @@ describe('InputPrompt', () => {
       mockBuffer.lines = ['@src/my\\ file.txt'];
       mockBuffer.cursor = [0, 16]; // After the escaped space and filename

-      mockedUseCompletion.mockReturnValue({
-        ...mockCompletion,
+      mockedUseCommandCompletion.mockReturnValue({
+        ...mockCommandCompletion,
         showSuggestions: true,
         suggestions: [{ label: 'my file.txt', value: 'my file.txt' }],
       });
@@ -958,11 +985,13 @@ describe('InputPrompt', () => {
       const { unmount } = render();
       await wait();

-      expect(mockedUseCompletion).toHaveBeenCalledWith(
+      expect(mockedUseCommandCompletion).toHaveBeenCalledWith(
         mockBuffer,
+        ['/test/project/src'],
         path.join('test', 'project', 'src'),
         mockSlashCommands,
         mockCommandContext,
+        false,
         expect.any(Object),
       );
@@ -975,8 +1004,8 @@ describe('InputPrompt', () => {
       mockBuffer.lines = ['@path/my\\ file.txt hello'];
       mockBuffer.cursor = [0, 24]; // After "hello"

-      mockedUseCompletion.mockReturnValue({
-        ...mockCompletion,
+      mockedUseCommandCompletion.mockReturnValue({
+        ...mockCommandCompletion,
         showSuggestions: false,
         suggestions: [],
       });
@@ -984,11 +1013,13 @@ describe('InputPrompt', () => {
       const { unmount } = render();
       await wait();

-      expect(mockedUseCompletion).toHaveBeenCalledWith(
+      expect(mockedUseCommandCompletion).toHaveBeenCalledWith(
         mockBuffer,
+        ['/test/project/src'],
         path.join('test', 'project', 'src'),
         mockSlashCommands,
         mockCommandContext,
+        false,
         expect.any(Object),
       );
@@ -1001,8 +1032,8 @@ describe('InputPrompt', () => {
       mockBuffer.lines = ['@docs/my\\ long\\ file\\ name.md'];
       mockBuffer.cursor = [0, 29]; // At the end

-      mockedUseCompletion.mockReturnValue({
-        ...mockCompletion,
+      mockedUseCommandCompletion.mockReturnValue({
+        ...mockCommandCompletion,
         showSuggestions: true,
         suggestions: [
           { label: 'my long file name.md', value: 'my long file name.md' },
@@ -1012,11 +1043,13 @@ describe('InputPrompt', () => {
       const { unmount } = render();
       await wait();

-      expect(mockedUseCompletion).toHaveBeenCalledWith(
+      expect(mockedUseCommandCompletion).toHaveBeenCalledWith(
         mockBuffer,
+        ['/test/project/src'],
         path.join('test', 'project', 'src'),
         mockSlashCommands,
         mockCommandContext,
+        false,
         expect.any(Object),
       );
@@ -1029,8 +1062,8 @@ describe('InputPrompt', () => {
       mockBuffer.lines = ['/memory\\ test'];
       mockBuffer.cursor = [0, 13]; // At the end

-      mockedUseCompletion.mockReturnValue({
-        ...mockCompletion,
+      mockedUseCommandCompletion.mockReturnValue({
+        ...mockCommandCompletion,
         showSuggestions: true,
         suggestions: [{ label: 'test-command', value: 'test-command' }],
       });
@@ -1038,11 +1071,13 @@ describe('InputPrompt', () => {
       const { unmount } = render();
       await wait();

-      expect(mockedUseCompletion).toHaveBeenCalledWith(
+      expect(mockedUseCommandCompletion).toHaveBeenCalledWith(
         mockBuffer,
+        ['/test/project/src'],
         path.join('test', 'project', 'src'),
         mockSlashCommands,
         mockCommandContext,
+        false,
         expect.any(Object),
       );
@@ -1055,8 +1090,8 @@ describe('InputPrompt', () => {
       mockBuffer.lines = ['@' + path.join('files', 'emoji\\ 👍\\ test.txt')];
       mockBuffer.cursor = [0, 25]; // After the escaped space and emoji

-      mockedUseCompletion.mockReturnValue({
-        ...mockCompletion,
+      mockedUseCommandCompletion.mockReturnValue({
+        ...mockCommandCompletion,
         showSuggestions: true,
         suggestions: [
           { label: 'emoji 👍 test.txt', value: 'emoji 👍 test.txt' },
@@ -1066,11 +1101,13 @@ describe('InputPrompt', () => {
       const { unmount } = render();
       await wait();

-      expect(mockedUseCompletion).toHaveBeenCalledWith(
+      expect(mockedUseCommandCompletion).toHaveBeenCalledWith(
         mockBuffer,
+        ['/test/project/src'],
         path.join('test', 'project', 'src'),
         mockSlashCommands,
         mockCommandContext,
+        false,
         expect.any(Object),
       );
@@ -1152,4 +1189,92 @@ describe('InputPrompt', () => {
       unmount();
     });
   });
+
+  describe('reverse search', () => {
+    beforeEach(async () => {
+      props.shellModeActive = true;
+
+      vi.mocked(useShellHistory).mockReturnValue({
+        history: ['echo hello', 'echo world', 'ls'],
+        getPreviousCommand: vi.fn(),
+        getNextCommand: vi.fn(),
+        addCommandToHistory: vi.fn(),
+        resetHistoryPosition: vi.fn(),
+      });
+    });
+
+    it('invokes reverse search on Ctrl+R', async () => {
+      const { stdin, stdout, unmount } = render();
+      await wait();
+
+      stdin.write('\x12');
+      await wait();
+
+      const frame = stdout.lastFrame();
+      expect(frame).toContain('(r:)');
+      expect(frame).toContain('echo hello');
+      expect(frame).toContain('echo world');
+      expect(frame).toContain('ls');
+
+      unmount();
+    });
+
+    it('resets reverse search state on Escape', async () => {
+      const { stdin, stdout, unmount } = render();
+      await wait();
+
+      stdin.write('\x12');
+      await wait();
+      stdin.write('\x1B');
+      await wait();
+
+      const frame = stdout.lastFrame();
+      expect(frame).not.toContain('(r:)');
+      expect(frame).not.toContain('echo hello');
+
+      unmount();
+    });
+
+    it('completes the highlighted entry on Tab and exits reverse-search', async () => {
+      const { stdin, stdout, unmount } = render();
+      stdin.write('\x12');
+      await wait();
+      stdin.write('\t');
+      await wait();
+
+      expect(stdout.lastFrame()).not.toContain('(r:)');
+      expect(props.buffer.setText).toHaveBeenCalledWith('echo hello');
+      unmount();
+    });
+
+    it('submits the highlighted entry on Enter and exits reverse-search', async () => {
+      const { stdin, stdout, unmount } = render();
+      stdin.write('\x12');
+      await wait();
+
+      expect(stdout.lastFrame()).toContain('(r:)');
+      stdin.write('\r');
+      await wait();
+
+      expect(stdout.lastFrame()).not.toContain('(r:)');
+      expect(props.onSubmit).toHaveBeenCalledWith('echo hello');
+      unmount();
+    });
+
+    it('text and cursor position should be restored after reverse search', async () => {
+      props.buffer.setText('initial text');
+      props.buffer.cursor = [0, 3];
+      const { stdin, stdout, unmount } = render();
+      stdin.write('\x12');
+      await wait();
+      expect(stdout.lastFrame()).toContain('(r:)');
+      stdin.write('\x1B');
+      await wait();
+
+      expect(stdout.lastFrame()).not.toContain('(r:)');
+      expect(props.buffer.text).toBe('initial text');
+      expect(props.buffer.cursor).toEqual([0, 3]);
+
+      unmount();
+    });
+  });
 });
diff --git a/packages/cli/src/ui/components/InputPrompt.tsx b/packages/cli/src/ui/components/InputPrompt.tsx
index 00b6ac81..b09bc3b9 100644
--- a/packages/cli/src/ui/components/InputPrompt.tsx
+++ b/packages/cli/src/ui/components/InputPrompt.tsx
@@ -9,12 +9,13 @@ import { Box, Text } from 'ink';
 import { Colors } from '../colors.js';
 import { SuggestionsDisplay } from './SuggestionsDisplay.js';
 import { useInputHistory } from '../hooks/useInputHistory.js';
-import { TextBuffer } from './shared/text-buffer.js';
+import { TextBuffer, logicalPosToOffset } from './shared/text-buffer.js';
 import { cpSlice, cpLen } from '../utils/textUtils.js';
 import chalk from 'chalk';
 import stringWidth from 'string-width';
 import { useShellHistory } from '../hooks/useShellHistory.js';
-import { useCompletion } from '../hooks/useCompletion.js';
+import { useReverseSearchCompletion } from '../hooks/useReverseSearchCompletion.js';
+import { useCommandCompletion } from '../hooks/useCommandCompletion.js';
 import { useKeypress, Key } from '../hooks/useKeypress.js';
 import { CommandContext, SlashCommand } from '../commands/types.js';
 import { Config } from '@qwen-code/qwen-code-core';
@@ -60,16 +61,41 @@ export const InputPrompt: React.FC = ({
 }) => {
   const [justNavigatedHistory, setJustNavigatedHistory] = useState(false);

-  const completion = useCompletion(
+  const [dirs, setDirs] = useState(
+    config.getWorkspaceContext().getDirectories(),
+  );
+  const dirsChanged = config.getWorkspaceContext().getDirectories();
+  useEffect(() => {
+    if (dirs.length !== dirsChanged.length) {
+      setDirs(dirsChanged);
+    }
+  }, [dirs.length, dirsChanged]);
+  const [reverseSearchActive, setReverseSearchActive] = useState(false);
+  const [textBeforeReverseSearch, setTextBeforeReverseSearch] = useState('');
+  const [cursorPosition, setCursorPosition] = useState<[number, number]>([
+    0, 0,
+  ]);
+  const shellHistory = useShellHistory(config.getProjectRoot());
+  const historyData = shellHistory.history;
+
+  const completion = useCommandCompletion(
     buffer,
+    dirs,
     config.getTargetDir(),
     slashCommands,
     commandContext,
+    reverseSearchActive,
     config,
   );

+  const reverseSearchCompletion = useReverseSearchCompletion(
+    buffer,
+    historyData,
+    reverseSearchActive,
+  );
   const resetCompletionState = completion.resetCompletionState;
-  const shellHistory = useShellHistory(config.getProjectRoot());
+  const resetReverseSearchCompletionState =
+    reverseSearchCompletion.resetCompletionState;

   const handleSubmitAndClear = useCallback(
     (submittedValue: string) => {
@@ -81,8 +107,16 @@ export const InputPrompt: React.FC = ({
       buffer.setText('');
       onSubmit(submittedValue);
       resetCompletionState();
+      resetReverseSearchCompletionState();
     },
-    [onSubmit, buffer, resetCompletionState, shellModeActive, shellHistory],
+    [
+      onSubmit,
+      buffer,
+      resetCompletionState,
+      shellModeActive,
+      shellHistory,
+      resetReverseSearchCompletionState,
+    ],
   );

   const customSetTextAndResetCompletionSignal = useCallback(
@@ -107,6 +141,7 @@ export const InputPrompt: React.FC = ({
   useEffect(() => {
     if (justNavigatedHistory) {
       resetCompletionState();
+      resetReverseSearchCompletionState();
       setJustNavigatedHistory(false);
     }
   }, [
@@ -114,6 +149,7 @@ export const InputPrompt: React.FC = ({
     buffer.text,
     resetCompletionState,
     setJustNavigatedHistory,
+    resetReverseSearchCompletionState,
   ]);

   // Handle clipboard image pasting with Ctrl+V
@@ -186,6 +222,19 @@ export const InputPrompt: React.FC = ({
       }

       if (key.name === 'escape') {
+        if (reverseSearchActive) {
+          setReverseSearchActive(false);
+          reverseSearchCompletion.resetCompletionState();
+          buffer.setText(textBeforeReverseSearch);
+          const offset = logicalPosToOffset(
+            buffer.lines,
+            cursorPosition[0],
+            cursorPosition[1],
+          );
+          buffer.moveToOffset(offset);
+          return;
+        }
+
         if (shellModeActive) {
           setShellModeActive(false);
           return;
@@ -197,11 +246,61 @@ export const InputPrompt: React.FC = ({
         }
       }

+      if (shellModeActive && key.ctrl && key.name === 'r') {
+        setReverseSearchActive(true);
+        setTextBeforeReverseSearch(buffer.text);
+        setCursorPosition(buffer.cursor);
+        return;
+      }
+
       if (key.ctrl && key.name === 'l') {
         onClearScreen();
         return;
       }

+      if (reverseSearchActive) {
+        const {
+          activeSuggestionIndex,
+          navigateUp,
+          navigateDown,
+          showSuggestions,
+          suggestions,
+        } = reverseSearchCompletion;
+
+        if (showSuggestions) {
+          if (key.name === 'up') {
+            navigateUp();
+            return;
+          }
+          if (key.name === 'down') {
+            navigateDown();
+            return;
+          }
+          if (key.name === 'tab') {
+            reverseSearchCompletion.handleAutocomplete(activeSuggestionIndex);
+            reverseSearchCompletion.resetCompletionState();
+            setReverseSearchActive(false);
+            return;
+          }
+        }
+
+        if (key.name === 'return' && !key.ctrl) {
+          const textToSubmit =
+            showSuggestions && activeSuggestionIndex > -1
+              ? suggestions[activeSuggestionIndex].value
+              : buffer.text;
+          handleSubmitAndClear(textToSubmit);
+          reverseSearchCompletion.resetCompletionState();
+          setReverseSearchActive(false);
+          return;
+        }
+
+        // Prevent up/down from falling through to regular history navigation
+        if (key.name === 'up' || key.name === 'down') {
+          return;
+        }
+      }
+
       // If the command is a perfect match, pressing enter should execute it.
       if (completion.isPerfectMatch && key.name === 'return') {
         handleSubmitAndClear(buffer.text);
@@ -261,7 +360,6 @@ export const InputPrompt: React.FC = ({
           return;
         }
       } else {
-        // Shell History Navigation
         if (key.name === 'up') {
           const prevCommand = shellHistory.getPreviousCommand();
           if (prevCommand !== null) buffer.setText(prevCommand);
@@ -273,7 +371,6 @@ export const InputPrompt: React.FC = ({
           return;
         }
       }
-
       if (key.name === 'return' && !key.ctrl && !key.meta && !key.paste) {
         if (buffer.text.trim()) {
           const [row, col] = buffer.cursor;
@@ -351,9 +448,13 @@ export const InputPrompt: React.FC = ({
       inputHistory,
       handleSubmitAndClear,
       shellHistory,
+      reverseSearchCompletion,
      handleClipboardImage,
       resetCompletionState,
       vimHandleInput,
+      reverseSearchActive,
+      textBeforeReverseSearch,
+      cursorPosition,
     ],
   );
@@ -374,7 +475,15 @@ export const InputPrompt: React.FC = ({
-          {shellModeActive ? '! ' : '> '}
+          {shellModeActive ? (
+            reverseSearchActive ? (
+              (r:)
+            ) : (
+              '! '
+            )
+          ) : (
+            '> '
+          )}
         {buffer.text.length === 0 && placeholder ? (
@@ -438,6 +547,18 @@ export const InputPrompt: React.FC = ({
         />
       )}
+      {reverseSearchActive && (
+
+
+        )}
   );
 };
diff --git a/packages/cli/src/ui/components/ModelStatsDisplay.test.tsx b/packages/cli/src/ui/components/ModelStatsDisplay.test.tsx
index 57382d91..6adf2652 100644
--- a/packages/cli/src/ui/components/ModelStatsDisplay.test.tsx
+++ b/packages/cli/src/ui/components/ModelStatsDisplay.test.tsx
@@ -5,7 +5,7 @@
  */

 import { render } from 'ink-testing-library';
-import { describe, it, expect, vi } from 'vitest';
+import { describe, it, expect, vi, beforeAll, afterAll } from 'vitest';
 import { ModelStatsDisplay } from './ModelStatsDisplay.js';
 import * as SessionContext from '../contexts/SessionContext.js';
 import { SessionMetrics } from '../contexts/SessionContext.js';
@@ -38,6 +38,19 @@ const renderWithMockedStats = (metrics: SessionMetrics) => {
 };

 describe('', () => {
+  beforeAll(() => {
+    vi.spyOn(Number.prototype, 'toLocaleString').mockImplementation(function (
+      this: number,
+    ) {
+      // Use a stable 'en-US' format for test consistency.
+      return new Intl.NumberFormat('en-US').format(this);
+    });
+  });
+
+  afterAll(() => {
+    vi.restoreAllMocks();
+  });
+
   it('should render "no API calls" message when there are no active models', () => {
     const { lastFrame } = renderWithMockedStats({
       models: {},
diff --git a/packages/cli/src/ui/components/PrepareLabel.tsx b/packages/cli/src/ui/components/PrepareLabel.tsx
new file mode 100644
index 00000000..652a77a6
--- /dev/null
+++ b/packages/cli/src/ui/components/PrepareLabel.tsx
@@ -0,0 +1,48 @@
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import React from 'react';
+import { Text } from 'ink';
+import { Colors } from '../colors.js';
+
+interface PrepareLabelProps {
+  label: string;
+  matchedIndex?: number;
+  userInput: string;
+  textColor: string;
+  highlightColor?: string;
+}
+
+export const PrepareLabel: React.FC = ({
+  label,
+  matchedIndex,
+  userInput,
+  textColor,
+  highlightColor = Colors.AccentYellow,
+}) => {
+  if (
+    matchedIndex === undefined ||
+    matchedIndex < 0 ||
+    matchedIndex >= label.length ||
+    userInput.length === 0
+  ) {
+    return {label};
+  }
+
+  const start = label.slice(0, matchedIndex);
+  const match = label.slice(matchedIndex, matchedIndex + userInput.length);
+  const end = label.slice(matchedIndex + userInput.length);
+
+  return (
+
+      {start}
+
+        {match}
+
+      {end}
+
+  );
+};
diff --git a/packages/cli/src/ui/components/SuggestionsDisplay.tsx b/packages/cli/src/ui/components/SuggestionsDisplay.tsx
index 0620665f..9c4b5687 100644
--- a/packages/cli/src/ui/components/SuggestionsDisplay.tsx
+++ b/packages/cli/src/ui/components/SuggestionsDisplay.tsx
@@ -6,10 +6,12 @@

 import { Box, Text } from 'ink';
 import { Colors } from '../colors.js';
+import { PrepareLabel } from './PrepareLabel.js';

 export interface Suggestion {
   label: string;
   value: string;
   description?: string;
+  matchedIndex?: number;
 }

 interface SuggestionsDisplayProps {
   suggestions: Suggestion[];
@@ -58,18 +60,25 @@ export function SuggestionsDisplay({
           const originalIndex = startIndex + index;
           const isActive = originalIndex === activeIndex;
           const textColor = isActive ? Colors.AccentPurple : Colors.Gray;
+          const labelElement = (
+
+          );

           return (
-
+
               {userInput.startsWith('/') ? (
                 // only use box model for (/) command mode
-                {suggestion.label}
+                {labelElement}
               ) : (
-                // use regular text for other modes (@ context)
-                {suggestion.label}
+                labelElement
               )}
               {suggestion.description ? (
diff --git a/packages/cli/src/ui/components/messages/ToolConfirmationMessage.tsx b/packages/cli/src/ui/components/messages/ToolConfirmationMessage.tsx
index e6f718c0..7bb78eeb 100644
--- a/packages/cli/src/ui/components/messages/ToolConfirmationMessage.tsx
+++ b/packages/cli/src/ui/components/messages/ToolConfirmationMessage.tsx
@@ -118,7 +118,10 @@ export const ToolConfirmationMessage: React.FC<
       label: 'Modify with external editor',
       value: ToolConfirmationOutcome.ModifyWithEditor,
     },
-    { label: 'No (esc)', value: ToolConfirmationOutcome.Cancel },
+    {
+      label: 'No, suggest changes (esc)',
+      value: ToolConfirmationOutcome.Cancel,
+    },
   );

   bodyContent = (
 = ({ text }) => {
   const prefix = '> ';
   const prefixWidth = prefix.length;
+  const isSlashCommand = text.startsWith('/');
+
+  const textColor = isSlashCommand ? Colors.AccentPurple : Colors.Gray;
+  const borderColor = isSlashCommand ? Colors.AccentPurple : Colors.Gray;

   return (
 = ({ text }) => {
       alignSelf="flex-start"
     >
-        {prefix}
+        {prefix}
-
+
         {text}
diff --git a/packages/cli/src/ui/components/shared/text-buffer.test.ts b/packages/cli/src/ui/components/shared/text-buffer.test.ts
index 807c33df..cbceedbc 100644
--- a/packages/cli/src/ui/components/shared/text-buffer.test.ts
+++ b/packages/cli/src/ui/components/shared/text-buffer.test.ts
@@ -32,6 +32,7 @@ describe('textBufferReducer', () => {
     it('should return the initial state if state is undefined', () => {
       const action = { type: 'unknown_action' } as unknown as TextBufferAction;
       const state = textBufferReducer(initialState, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state).toEqual(initialState);
     });
@@ -42,6 +43,7 @@ describe('textBufferReducer', () => {
         payload: 'hello\nworld',
       };
       const state = textBufferReducer(initialState, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state.lines).toEqual(['hello', 'world']);
       expect(state.cursorRow).toBe(1);
       expect(state.cursorCol).toBe(5);
@@ -55,6 +57,7 @@ describe('textBufferReducer', () => {
         pushToUndo: false,
       };
       const state = textBufferReducer(initialState, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state.lines).toEqual(['no undo']);
       expect(state.undoStack.length).toBe(0);
     });
@@ -64,6 +67,7 @@ describe('textBufferReducer', () => {
     it('should insert a character', () => {
       const action: TextBufferAction = { type: 'insert', payload: 'a' };
       const state = textBufferReducer(initialState, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state.lines).toEqual(['a']);
       expect(state.cursorCol).toBe(1);
     });
@@ -72,6 +76,7 @@ describe('textBufferReducer', () => {
       const stateWithText = { ...initialState, lines: ['hello'] };
       const action: TextBufferAction = { type: 'insert', payload: '\n' };
       const state = textBufferReducer(stateWithText, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state.lines).toEqual(['', 'hello']);
       expect(state.cursorRow).toBe(1);
       expect(state.cursorCol).toBe(0);
@@ -88,6 +93,7 @@ describe('textBufferReducer', () => {
       };
       const action: TextBufferAction = { type: 'backspace' };
       const state = textBufferReducer(stateWithText, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state.lines).toEqual(['']);
       expect(state.cursorCol).toBe(0);
     });
@@ -101,6 +107,7 @@ describe('textBufferReducer', () => {
       };
       const action: TextBufferAction = { type: 'backspace' };
       const state = textBufferReducer(stateWithText, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state.lines).toEqual(['helloworld']);
       expect(state.cursorRow).toBe(0);
       expect(state.cursorCol).toBe(5);
@@ -115,12 +122,14 @@ describe('textBufferReducer', () => {
         payload: 'test',
       };
       const stateAfterInsert = textBufferReducer(initialState, insertAction);
+      expect(stateAfterInsert).toHaveOnlyValidCharacters();
       expect(stateAfterInsert.lines).toEqual(['test']);
       expect(stateAfterInsert.undoStack.length).toBe(1);

       // 2. Undo
       const undoAction: TextBufferAction = { type: 'undo' };
       const stateAfterUndo = textBufferReducer(stateAfterInsert, undoAction);
+      expect(stateAfterUndo).toHaveOnlyValidCharacters();
       expect(stateAfterUndo.lines).toEqual(['']);
       expect(stateAfterUndo.undoStack.length).toBe(0);
       expect(stateAfterUndo.redoStack.length).toBe(1);
@@ -128,6 +137,7 @@ describe('textBufferReducer', () => {
       // 3. Redo
       const redoAction: TextBufferAction = { type: 'redo' };
       const stateAfterRedo = textBufferReducer(stateAfterUndo, redoAction);
+      expect(stateAfterRedo).toHaveOnlyValidCharacters();
       expect(stateAfterRedo.lines).toEqual(['test']);
       expect(stateAfterRedo.undoStack.length).toBe(1);
       expect(stateAfterRedo.redoStack.length).toBe(0);
@@ -144,6 +154,7 @@ describe('textBufferReducer', () => {
       };
       const action: TextBufferAction = { type: 'create_undo_snapshot' };
       const state = textBufferReducer(stateWithText, action);
+      expect(state).toHaveOnlyValidCharacters();
       expect(state.lines).toEqual(['hello']);
       expect(state.cursorRow).toBe(0);
@@ -157,16 +168,19 @@ describe('textBufferReducer', () => {
 });

 // Helper to get the state from the hook
-const getBufferState = (result: { current: TextBuffer }) => ({
-  text: result.current.text,
-  lines: [...result.current.lines], // Clone for safety
-  cursor: [...result.current.cursor] as [number, number],
-  allVisualLines: [...result.current.allVisualLines],
-  viewportVisualLines: [...result.current.viewportVisualLines],
-  visualCursor: [...result.current.visualCursor] as [number, number],
-  visualScrollRow: result.current.visualScrollRow,
-  preferredCol: result.current.preferredCol,
-});
+const getBufferState = (result: { current: TextBuffer }) => {
+  expect(result.current).toHaveOnlyValidCharacters();
+  return {
+    text: result.current.text,
+    lines: [...result.current.lines], // Clone for safety
+    cursor: [...result.current.cursor] as [number, number],
+    allVisualLines: [...result.current.allVisualLines],
+    viewportVisualLines: [...result.current.viewportVisualLines],
+    visualCursor: [...result.current.visualCursor] as [number, number],
+    visualScrollRow: result.current.visualScrollRow,
+    preferredCol: result.current.preferredCol,
+  };
+};

 describe('useTextBuffer', () => {
   let viewport: Viewport;
@@ -1152,6 +1166,22 @@ Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots
     expect(state.text).toBe('fiXrd');
     expect(state.cursor).toEqual([0, 3]); // After 'X'
   });
+
+  it('should replace a single-line range with multi-line text', () => {
+    const { result } = renderHook(() =>
+      useTextBuffer({
+        initialText: 'one two three',
+        viewport,
+        isValidPath: () => false,
+      }),
+    );
+    // Replace "two" with "new\nline"
+    act(() => result.current.replaceRange(0, 4, 0, 7, 'new\nline'));
+    const state = getBufferState(result);
+    expect(state.lines).toEqual(['one new', 'line three']);
+    expect(state.text).toBe('one new\nline three');
+    expect(state.cursor).toEqual([1, 4]); // cursor after 'line'
+  });
 });

 describe('Input Sanitization', () => {
@@ -1159,7 +1189,7 @@ Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots
     const { result } = renderHook(() =>
       useTextBuffer({ viewport, isValidPath: () => false }),
     );
-    const textWithAnsi = '\x1B[31mHello\x1B[0m';
+    const textWithAnsi = '\x1B[31mHello\x1B[0m \x1B[32mWorld\x1B[0m';
     act(() =>
       result.current.handleInput({
         name: '',
@@ -1170,7 +1200,7 @@ Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots
         sequence: textWithAnsi,
       }),
     );
-    expect(getBufferState(result).text).toBe('Hello');
+    expect(getBufferState(result).text).toBe('Hello World');
   });

   it('should strip control characters from input', () => {
@@ -1425,6 +1455,7 @@ describe('textBufferReducer vim operations', () => {
     };

     const result = textBufferReducer(initialState, action);
+    expect(result).toHaveOnlyValidCharacters();

     // After deleting line2, we should have line1 and line3, with cursor on line3 (now at index 1)
     expect(result.lines).toEqual(['line1', 'line3']);
@@ -1452,6 +1483,7 @@ describe('textBufferReducer vim operations', () => {
     };

     const result = textBufferReducer(initialState, action);
+    expect(result).toHaveOnlyValidCharacters();

     // Should delete line2 and line3, leaving line1 and line4
     expect(result.lines).toEqual(['line1', 'line4']);
@@ -1479,6 +1511,7 @@ describe('textBufferReducer vim operations', () => {
     };

     const result = textBufferReducer(initialState, action);
+    expect(result).toHaveOnlyValidCharacters();

     // Should clear the line content but keep the line
     expect(result.lines).toEqual(['']);
@@ -1506,6 +1539,7 @@ describe('textBufferReducer vim operations', () => {
     };

     const result = textBufferReducer(initialState, action);
+    expect(result).toHaveOnlyValidCharacters();

     // Should delete the last line completely, not leave empty line
     expect(result.lines).toEqual(['line1']);
@@ -1534,6 +1568,7 @@ describe('textBufferReducer vim operations', () => {
     };

     const afterDelete = textBufferReducer(initialState, deleteAction);
+    expect(afterDelete).toHaveOnlyValidCharacters();

     // After deleting all lines, should have one empty line
     expect(afterDelete.lines).toEqual(['']);
@@ -1547,6 +1582,7 @@ describe('textBufferReducer vim operations', () => {
     };

     const afterPaste = textBufferReducer(afterDelete, pasteAction);
+    expect(afterPaste).toHaveOnlyValidCharacters();

     // All lines including the first one should be present
     expect(afterPaste.lines).toEqual(['new1', 'new2', 'new3', 'new4']);
diff --git a/packages/cli/src/ui/components/shared/text-buffer.ts b/packages/cli/src/ui/components/shared/text-buffer.ts
index 9ed742d8..273d1ce0 100644
--- a/packages/cli/src/ui/components/shared/text-buffer.ts
+++ b/packages/cli/src/ui/components/shared/text-buffer.ts
@@ -271,26 +271,23 @@ export const replaceRangeInternal = (
     .replace(/\r/g, '\n');
   const replacementParts = normalisedReplacement.split('\n');

-  // Replace the content
-  if (startRow === endRow) {
-    newLines[startRow] = prefix + normalisedReplacement + suffix;
+  // The combined first line of the new text
+  const firstLine = prefix + replacementParts[0];
+
+  if (replacementParts.length === 1) {
+    // No newlines in replacement: combine prefix, replacement, and suffix on one line.
+    newLines.splice(startRow, endRow - startRow + 1, firstLine + suffix);
   } else {
-    const firstLine = prefix + replacementParts[0];
-    if (replacementParts.length === 1) {
-      // Single line of replacement text, but spanning multiple original lines
-      newLines.splice(startRow, endRow - startRow + 1, firstLine + suffix);
-    } else {
-      // Multi-line replacement text
-      const lastLine = replacementParts[replacementParts.length - 1] + suffix;
-      const middleLines = replacementParts.slice(1, -1);
-      newLines.splice(
-        startRow,
-        endRow - startRow + 1,
-        firstLine,
-        ...middleLines,
-        lastLine,
-      );
-    }
+    // Newlines in replacement: create new lines.
+    const lastLine = replacementParts[replacementParts.length - 1] + suffix;
+    const middleLines = replacementParts.slice(1, -1);
+    newLines.splice(
+      startRow,
+      endRow - startRow + 1,
+      firstLine,
+      ...middleLines,
+      lastLine,
+    );
   }

   const finalCursorRow = startRow + replacementParts.length - 1;
diff --git a/packages/cli/src/ui/components/shared/vim-buffer-actions.test.ts b/packages/cli/src/ui/components/shared/vim-buffer-actions.test.ts
index f268bb1e..8f7f72ab 100644
--- a/packages/cli/src/ui/components/shared/vim-buffer-actions.test.ts
+++ b/packages/cli/src/ui/components/shared/vim-buffer-actions.test.ts
@@ -36,7 +36,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(2);
       expect(result.preferredCol).toBeNull();
     });
@@ -49,7 +49,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(0);
     });
@@ -61,7 +61,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(0);
       expect(result.cursorCol).toBe(4); // On last character '1' of 'line1'
     });
@@ -74,7 +74,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(0);
       expect(result.cursorCol).toBe(1); // On 'b' after 5 left movements
     });
@@ -88,6 +88,7 @@ describe('vim-buffer-actions', () => {
         type: 'vim_move_right' as const,
         payload: { count: 1 },
       });
+      expect(state).toHaveOnlyValidCharacters();

       expect(state.cursorRow).toBe(1);
       expect(state.cursorCol).toBe(0); // Should be on 'f'
@@ -96,6 +97,7 @@ describe('vim-buffer-actions', () => {
         type: 'vim_move_left' as const,
         payload: { count: 1 },
       });
+      expect(state).toHaveOnlyValidCharacters();

       expect(state.cursorRow).toBe(0);
       expect(state.cursorCol).toBe(10); // Should be on 'd', not past it
     });
@@ -110,7 +112,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(5);
     });
@@ -122,7 +124,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(4); // Last character of 'hello'
     });
@@ -134,7 +136,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(1);
       expect(result.cursorCol).toBe(0);
     });
@@ -146,7 +148,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_up' as const, payload: { count: 2 } };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(0);
       expect(result.cursorCol).toBe(3);
     });
@@ -156,7 +158,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_up' as const, payload: { count: 5 } };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(0);
     });
@@ -165,7 +167,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_up' as const, payload: { count: 1 } };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(0);
       expect(result.cursorCol).toBe(5); // End of 'short'
     });
@@ -180,7 +182,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(2);
       expect(result.cursorCol).toBe(2);
     });
@@ -193,7 +195,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(1);
     });
   });
@@ -207,7 +209,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(6); // Start of 'world'
     });
@@ -219,7 +221,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(12); // Start of 'test'
     });
@@ -231,7 +233,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(5); // Start of ','
     });
   });
@@ -245,7 +247,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(6); // Start of 'world'
     });
@@ -257,7 +259,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(0); // Start of 'hello'
     });
   });
@@ -271,7 +273,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(4); // End of 'hello'
     });
@@ -283,7 +285,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(10); // End of 'world'
     });
   });
@@ -294,7 +296,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_to_line_start' as const };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(0);
     });
@@ -303,7 +305,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_to_line_end' as const };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(10); // Last character of 'hello world'
     });
@@ -312,7 +314,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_to_first_nonwhitespace' as const };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorCol).toBe(3); // Position of 'h'
     });
@@ -321,7 +323,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_to_first_line' as const };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(0);
       expect(result.cursorCol).toBe(0);
     });
@@ -331,7 +333,7 @@ describe('vim-buffer-actions', () => {
       const action = { type: 'vim_move_to_last_line' as const };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(2);
       expect(result.cursorCol).toBe(0);
     });
@@ -344,7 +346,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(1); // 0-indexed
       expect(result.cursorCol).toBe(0);
     });
@@ -357,7 +359,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.cursorRow).toBe(1); // Last line
     });
   });
@@ -373,7 +375,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('hllo');
       expect(result.cursorCol).toBe(1);
     });
@@ -386,7 +388,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('ho');
       expect(result.cursorCol).toBe(1);
     });
@@ -399,7 +401,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('hel');
       expect(result.cursorCol).toBe(3);
     });
@@ -412,7 +414,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('hello');
       expect(result.cursorCol).toBe(5);
     });
@@ -427,7 +429,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('world test');
       expect(result.cursorCol).toBe(0);
     });
@@ -440,7 +442,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('test');
       expect(result.cursorCol).toBe(0);
     });
@@ -453,7 +455,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('hello ');
       expect(result.cursorCol).toBe(6);
     });
@@ -468,7 +470,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('hello test');
       expect(result.cursorCol).toBe(6);
     });
@@ -481,7 +483,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines[0]).toBe('test');
       expect(result.cursorCol).toBe(0);
     });
@@ -496,7 +498,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
       expect(result.lines).toEqual(['line1', 'line3']);
       expect(result.cursorRow).toBe(1);
       expect(result.cursorCol).toBe(0);
@@ -510,7 +512,7 @@ describe('vim-buffer-actions', () => {
       };

       const result = handleVimAction(state, action);
-
+      expect(result).toHaveOnlyValidCharacters();
expect(result.lines).toEqual(['line3']); expect(result.cursorRow).toBe(0); expect(result.cursorCol).toBe(0); @@ -524,7 +526,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.lines).toEqual(['']); expect(result.cursorRow).toBe(0); expect(result.cursorCol).toBe(0); @@ -537,7 +539,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_delete_to_end_of_line' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.lines[0]).toBe('hello'); expect(result.cursorCol).toBe(5); }); @@ -547,7 +549,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_delete_to_end_of_line' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.lines[0]).toBe('hello'); }); }); @@ -560,7 +562,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_insert_at_cursor' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorRow).toBe(0); expect(result.cursorCol).toBe(2); }); @@ -572,7 +574,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_append_at_cursor' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(3); }); @@ -581,7 +583,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_append_at_cursor' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(5); }); }); @@ -592,7 +594,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_append_at_line_end' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(11); }); }); @@ -603,7 +605,7 @@ 
describe('vim-buffer-actions', () => { const action = { type: 'vim_insert_at_line_start' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(2); }); @@ -612,34 +614,32 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_insert_at_line_start' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(3); }); }); describe('vim_open_line_below', () => { - it('should insert newline at end of current line', () => { + it('should insert a new line below the current one', () => { const state = createTestState(['hello world'], 0, 5); const action = { type: 'vim_open_line_below' as const }; const result = handleVimAction(state, action); - - // The implementation inserts newline at end of current line and cursor moves to column 0 - expect(result.lines[0]).toBe('hello world\n'); - expect(result.cursorRow).toBe(0); - expect(result.cursorCol).toBe(0); // Cursor position after replaceRangeInternal + expect(result).toHaveOnlyValidCharacters(); + expect(result.lines).toEqual(['hello world', '']); + expect(result.cursorRow).toBe(1); + expect(result.cursorCol).toBe(0); }); }); describe('vim_open_line_above', () => { - it('should insert newline before current line', () => { + it('should insert a new line above the current one', () => { const state = createTestState(['hello', 'world'], 1, 2); const action = { type: 'vim_open_line_above' as const }; const result = handleVimAction(state, action); - - // The implementation inserts newline at beginning of current line - expect(result.lines).toEqual(['hello', '\nworld']); + expect(result).toHaveOnlyValidCharacters(); + expect(result.lines).toEqual(['hello', '', 'world']); expect(result.cursorRow).toBe(1); expect(result.cursorCol).toBe(0); }); @@ -651,7 +651,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_escape_insert_mode' as const }; 
const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(2); }); @@ -660,7 +660,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_escape_insert_mode' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(0); }); }); @@ -676,7 +676,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.lines[0]).toBe('world test'); expect(result.cursorCol).toBe(0); }); @@ -691,7 +691,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.lines[0]).toBe(''); expect(result.cursorCol).toBe(0); }); @@ -706,7 +706,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.lines[0]).toBe('hel world'); expect(result.cursorCol).toBe(3); }); @@ -719,7 +719,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.lines[0]).toBe('hellorld'); // Deletes ' wo' (3 chars to the right) expect(result.cursorCol).toBe(5); }); @@ -732,7 +732,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); // The movement 'j' with count 2 changes 2 lines starting from cursor row // Since we're at cursor position 2, it changes lines starting from current row expect(result.lines).toEqual(['line1', 'line2', 'line3']); // No change because count > available lines @@ -751,7 +751,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorRow).toBe(0); 
expect(result.cursorCol).toBe(0); }); @@ -761,7 +761,7 @@ describe('vim-buffer-actions', () => { const action = { type: 'vim_move_to_line_end' as const }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.cursorCol).toBe(0); // Should be last character position }); @@ -773,7 +773,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); // Should move to next line with content expect(result.cursorRow).toBe(2); expect(result.cursorCol).toBe(0); @@ -789,7 +789,7 @@ describe('vim-buffer-actions', () => { }; const result = handleVimAction(state, action); - + expect(result).toHaveOnlyValidCharacters(); expect(result.undoStack).toHaveLength(2); // Original plus new snapshot }); }); diff --git a/packages/cli/src/ui/editors/editorSettingsManager.ts b/packages/cli/src/ui/editors/editorSettingsManager.ts index ccc0e0b6..8f5c3710 100644 --- a/packages/cli/src/ui/editors/editorSettingsManager.ts +++ b/packages/cli/src/ui/editors/editorSettingsManager.ts @@ -17,28 +17,23 @@ export interface EditorDisplay { } export const EDITOR_DISPLAY_NAMES: Record = { - zed: 'Zed', + cursor: 'Cursor', + emacs: 'Emacs', + neovim: 'Neovim', + vim: 'Vim', vscode: 'VS Code', vscodium: 'VSCodium', windsurf: 'Windsurf', - cursor: 'Cursor', - vim: 'Vim', - neovim: 'Neovim', + zed: 'Zed', }; class EditorSettingsManager { private readonly availableEditors: EditorDisplay[]; constructor() { - const editorTypes: EditorType[] = [ - 'zed', - 'vscode', - 'vscodium', - 'windsurf', - 'cursor', - 'vim', - 'neovim', - ]; + const editorTypes = Object.keys( + EDITOR_DISPLAY_NAMES, + ).sort() as EditorType[]; this.availableEditors = [ { name: 'None', diff --git a/packages/cli/src/ui/hooks/atCommandProcessor.test.ts b/packages/cli/src/ui/hooks/atCommandProcessor.test.ts index 10ec608d..cbbf7900 100644 --- a/packages/cli/src/ui/hooks/atCommandProcessor.test.ts +++ 
b/packages/cli/src/ui/hooks/atCommandProcessor.test.ts @@ -57,6 +57,10 @@ describe('handleAtCommand', () => { respectGeminiIgnore: true, }), getEnableRecursiveFileSearch: vi.fn(() => true), + getWorkspaceContext: () => ({ + isPathWithinWorkspace: () => true, + getDirectories: () => [testRootDir], + }), } as unknown as Config; const registry = new ToolRegistry(mockConfig); @@ -685,5 +689,397 @@ describe('handleAtCommand', () => { `Ignored 1 files:\nGemini-ignored: ${geminiIgnoredFile}`, ); }); - // }); + + describe('punctuation termination in @ commands', () => { + const punctuationTestCases = [ + { + name: 'comma', + fileName: 'test.txt', + fileContent: 'File content here', + queryTemplate: (filePath: string) => + `Look at @${filePath}, then explain it.`, + messageId: 400, + }, + { + name: 'period', + fileName: 'readme.md', + fileContent: 'File content here', + queryTemplate: (filePath: string) => + `Check @${filePath}. What does it say?`, + messageId: 401, + }, + { + name: 'semicolon', + fileName: 'example.js', + fileContent: 'Code example', + queryTemplate: (filePath: string) => + `Review @${filePath}; check for bugs.`, + messageId: 402, + }, + { + name: 'exclamation mark', + fileName: 'important.txt', + fileContent: 'Important content', + queryTemplate: (filePath: string) => + `Look at @${filePath}! This is critical.`, + messageId: 403, + }, + { + name: 'question mark', + fileName: 'config.json', + fileContent: 'Config settings', + queryTemplate: (filePath: string) => + `What is in @${filePath}? 
Please explain.`, + messageId: 404, + }, + { + name: 'opening parenthesis', + fileName: 'func.ts', + fileContent: 'Function definition', + queryTemplate: (filePath: string) => + `Analyze @${filePath}(the main function).`, + messageId: 405, + }, + { + name: 'closing parenthesis', + fileName: 'data.json', + fileContent: 'Test data', + queryTemplate: (filePath: string) => + `Use data from @${filePath}) for testing.`, + messageId: 406, + }, + { + name: 'opening square bracket', + fileName: 'array.js', + fileContent: 'Array data', + queryTemplate: (filePath: string) => + `Check @${filePath}[0] for the first element.`, + messageId: 407, + }, + { + name: 'closing square bracket', + fileName: 'list.md', + fileContent: 'List content', + queryTemplate: (filePath: string) => + `Review item @${filePath}] from the list.`, + messageId: 408, + }, + { + name: 'opening curly brace', + fileName: 'object.ts', + fileContent: 'Object definition', + queryTemplate: (filePath: string) => + `Parse @${filePath}{prop1: value1}.`, + messageId: 409, + }, + { + name: 'closing curly brace', + fileName: 'config.yaml', + fileContent: 'Configuration', + queryTemplate: (filePath: string) => + `Use settings from @${filePath}} for deployment.`, + messageId: 410, + }, + ]; + + it.each(punctuationTestCases)( + 'should terminate @path at $name', + async ({ fileName, fileContent, queryTemplate, messageId }) => { + const filePath = await createTestFile( + path.join(testRootDir, fileName), + fileContent, + ); + const query = queryTemplate(filePath); + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: query }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }, + ); 
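The parameterized cases above all exercise the termination rule this patch adds to `parseAllAtCommands`: a path ends at the first unescaped whitespace or punctuation character, except that `.` only terminates when followed by whitespace or end of string, so extensions like `.d.ts` survive. A minimal standalone sketch of that rule — the function name `findPathEnd` is illustrative, not part of the patch:

```typescript
// Illustrative sketch of the termination rule the tests above exercise.
// Returns the index (exclusive) where an @path starting at `start` ends.
function findPathEnd(query: string, start: number): number {
  let i = start;
  let inEscape = false;
  while (i < query.length) {
    const char = query[i];
    if (inEscape) {
      // A backslash-escaped character (e.g. an escaped space) is part of the path.
      inEscape = false;
    } else if (char === '\\') {
      inEscape = true;
    } else if (/[,\s;!?()[\]{}]/.test(char)) {
      // Unescaped whitespace or punctuation ends the path immediately.
      break;
    } else if (char === '.') {
      // '.' ends the path only at a sentence boundary: when followed by
      // whitespace or end of string. This keeps 'example.d.ts' intact while
      // still terminating "Check @config.json. Next sentence" after 'json'.
      const next = i + 1 < query.length ? query[i + 1] : '';
      if (next === '' || /\s/.test(next)) {
        break;
      }
    }
    i++;
  }
  return i;
}
```

The one-character lookahead on `.` is the key design choice: it distinguishes a period that belongs to a multi-dot file name from one that punctuates the surrounding sentence, without needing any list of known extensions.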
+ + it('should handle multiple @paths terminated by different punctuation', async () => { + const content1 = 'First file'; + const file1Path = await createTestFile( + path.join(testRootDir, 'first.txt'), + content1, + ); + const content2 = 'Second file'; + const file2Path = await createTestFile( + path.join(testRootDir, 'second.txt'), + content2, + ); + const query = `Compare @${file1Path}, @${file2Path}; what's different?`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 411, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Compare @${file1Path}, @${file2Path}; what's different?` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${file1Path}:\n` }, + { text: content1 }, + { text: `\nContent from @${file2Path}:\n` }, + { text: content2 }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should still handle escaped spaces in paths before punctuation', async () => { + const fileContent = 'Spaced file content'; + const filePath = await createTestFile( + path.join(testRootDir, 'spaced file.txt'), + fileContent, + ); + const escapedPath = path.join(testRootDir, 'spaced\\ file.txt'); + const query = `Check @${escapedPath}, it has spaces.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 412, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Check @${filePath}, it has spaces.` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should not break file paths with periods in extensions', async () => { + const fileContent = 'TypeScript content'; 
+ const filePath = await createTestFile( + path.join(testRootDir, 'example.d.ts'), + fileContent, + ); + const query = `Analyze @${filePath} for type definitions.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 413, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Analyze @${filePath} for type definitions.` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should handle file paths ending with period followed by space', async () => { + const fileContent = 'Config content'; + const filePath = await createTestFile( + path.join(testRootDir, 'config.json'), + fileContent, + ); + const query = `Check @${filePath}. This file contains settings.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 414, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Check @${filePath}. 
This file contains settings.` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should handle comma termination with complex file paths', async () => { + const fileContent = 'Package info'; + const filePath = await createTestFile( + path.join(testRootDir, 'package.json'), + fileContent, + ); + const query = `Review @${filePath}, then check dependencies.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 415, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Review @${filePath}, then check dependencies.` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should not terminate at period within file name', async () => { + const fileContent = 'Version info'; + const filePath = await createTestFile( + path.join(testRootDir, 'version.1.2.3.txt'), + fileContent, + ); + const query = `Check @${filePath} contains version information.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 416, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Check @${filePath} contains version information.` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should handle end of string termination for period and comma', async () => { + const fileContent = 'End file content'; + const filePath = await 
createTestFile( + path.join(testRootDir, 'end.txt'), + fileContent, + ); + const query = `Show me @${filePath}.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 417, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Show me @${filePath}.` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should handle files with special characters in names', async () => { + const fileContent = 'File with special chars content'; + const filePath = await createTestFile( + path.join(testRootDir, 'file$with&special#chars.txt'), + fileContent, + ); + const query = `Check @${filePath} for content.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 418, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Check @${filePath} for content.` }, + { text: '\n--- Content from referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + + it('should handle basic file names without special characters', async () => { + const fileContent = 'Basic file content'; + const filePath = await createTestFile( + path.join(testRootDir, 'basicfile.txt'), + fileContent, + ); + const query = `Check @${filePath} please.`; + + const result = await handleAtCommand({ + query, + config: mockConfig, + addItem: mockAddItem, + onDebugMessage: mockOnDebugMessage, + messageId: 421, + signal: abortController.signal, + }); + + expect(result).toEqual({ + processedQuery: [ + { text: `Check @${filePath} please.` }, + { text: '\n--- Content from 
referenced files ---' }, + { text: `\nContent from @${filePath}:\n` }, + { text: fileContent }, + { text: '\n--- End of content ---' }, + ], + shouldProceed: true, + }); + }); + }); }); diff --git a/packages/cli/src/ui/hooks/atCommandProcessor.ts b/packages/cli/src/ui/hooks/atCommandProcessor.ts index 25d57699..2e704183 100644 --- a/packages/cli/src/ui/hooks/atCommandProcessor.ts +++ b/packages/cli/src/ui/hooks/atCommandProcessor.ts @@ -87,9 +87,17 @@ function parseAllAtCommands(query: string): AtCommandPart[] { inEscape = false; } else if (char === '\\') { inEscape = true; - } else if (/\s/.test(char)) { - // Path ends at first whitespace not escaped + } else if (/[,\s;!?()[\]{}]/.test(char)) { + // Path ends at first whitespace or punctuation not escaped break; + } else if (char === '.') { + // For . we need to be more careful - only terminate if followed by whitespace or end of string + // This allows file extensions like .txt, .js but terminates at sentence endings like "file.txt. Next sentence" + const nextChar = + pathEndIndex + 1 < query.length ? query[pathEndIndex + 1] : ''; + if (nextChar === '' || /\s/.test(nextChar)) { + break; + } } pathEndIndex++; } @@ -188,6 +196,14 @@ export async function handleAtCommand({ // Check if path should be ignored based on filtering options + const workspaceContext = config.getWorkspaceContext(); + if (!workspaceContext.isPathWithinWorkspace(pathName)) { + onDebugMessage( + `Path ${pathName} is not in the workspace and will be skipped.`, + ); + continue; + } + const gitIgnored = respectFileIgnore.respectGitIgnore && fileDiscovery.shouldIgnoreFile(pathName, { @@ -215,90 +231,88 @@ export async function handleAtCommand({ continue; } - let currentPathSpec = pathName; - let resolvedSuccessfully = false; - - try { - const absolutePath = path.resolve(config.getTargetDir(), pathName); - const stats = await fs.stat(absolutePath); - if (stats.isDirectory()) { - currentPathSpec = - pathName + (pathName.endsWith(path.sep) ? 
`**` : `/**`); - onDebugMessage( - `Path ${pathName} resolved to directory, using glob: ${currentPathSpec}`, - ); - } else { - onDebugMessage(`Path ${pathName} resolved to file: ${absolutePath}`); - } - resolvedSuccessfully = true; - } catch (error) { - if (isNodeError(error) && error.code === 'ENOENT') { - if (config.getEnableRecursiveFileSearch() && globTool) { + for (const dir of config.getWorkspaceContext().getDirectories()) { + let currentPathSpec = pathName; + let resolvedSuccessfully = false; + try { + const absolutePath = path.resolve(dir, pathName); + const stats = await fs.stat(absolutePath); + if (stats.isDirectory()) { + currentPathSpec = + pathName + (pathName.endsWith(path.sep) ? `**` : `/**`); onDebugMessage( - `Path ${pathName} not found directly, attempting glob search.`, + `Path ${pathName} resolved to directory, using glob: ${currentPathSpec}`, ); - try { - const globResult = await globTool.execute( - { - pattern: `**/*${pathName}*`, - path: config.getTargetDir(), - }, - signal, + } else { + onDebugMessage(`Path ${pathName} resolved to file: ${absolutePath}`); + } + resolvedSuccessfully = true; + } catch (error) { + if (isNodeError(error) && error.code === 'ENOENT') { + if (config.getEnableRecursiveFileSearch() && globTool) { + onDebugMessage( + `Path ${pathName} not found directly, attempting glob search.`, ); - if ( - globResult.llmContent && - typeof globResult.llmContent === 'string' && - !globResult.llmContent.startsWith('No files found') && - !globResult.llmContent.startsWith('Error:') - ) { - const lines = globResult.llmContent.split('\n'); - if (lines.length > 1 && lines[1]) { - const firstMatchAbsolute = lines[1].trim(); - currentPathSpec = path.relative( - config.getTargetDir(), - firstMatchAbsolute, - ); - onDebugMessage( - `Glob search for ${pathName} found ${firstMatchAbsolute}, using relative path: ${currentPathSpec}`, - ); - resolvedSuccessfully = true; + try { + const globResult = await globTool.execute( + { + pattern: 
`**/*${pathName}*`, + path: dir, + }, + signal, + ); + if ( + globResult.llmContent && + typeof globResult.llmContent === 'string' && + !globResult.llmContent.startsWith('No files found') && + !globResult.llmContent.startsWith('Error:') + ) { + const lines = globResult.llmContent.split('\n'); + if (lines.length > 1 && lines[1]) { + const firstMatchAbsolute = lines[1].trim(); + currentPathSpec = path.relative(dir, firstMatchAbsolute); + onDebugMessage( + `Glob search for ${pathName} found ${firstMatchAbsolute}, using relative path: ${currentPathSpec}`, + ); + resolvedSuccessfully = true; + } else { + onDebugMessage( + `Glob search for '**/*${pathName}*' did not return a usable path. Path ${pathName} will be skipped.`, + ); + } } else { onDebugMessage( - `Glob search for '**/*${pathName}*' did not return a usable path. Path ${pathName} will be skipped.`, + `Glob search for '**/*${pathName}*' found no files or an error. Path ${pathName} will be skipped.`, ); } - } else { + } catch (globError) { + console.error( + `Error during glob search for ${pathName}: ${getErrorMessage(globError)}`, + ); onDebugMessage( - `Glob search for '**/*${pathName}*' found no files or an error. Path ${pathName} will be skipped.`, + `Error during glob search for ${pathName}. Path ${pathName} will be skipped.`, ); } - } catch (globError) { - console.error( - `Error during glob search for ${pathName}: ${getErrorMessage(globError)}`, - ); + } else { onDebugMessage( - `Error during glob search for ${pathName}. Path ${pathName} will be skipped.`, + `Glob tool not found. Path ${pathName} will be skipped.`, ); } } else { + console.error( + `Error stating path ${pathName}: ${getErrorMessage(error)}`, + ); onDebugMessage( - `Glob tool not found. Path ${pathName} will be skipped.`, + `Error stating path ${pathName}. 
Path ${pathName} will be skipped.`, ); } - } else { - console.error( - `Error stating path ${pathName}: ${getErrorMessage(error)}`, - ); - onDebugMessage( - `Error stating path ${pathName}. Path ${pathName} will be skipped.`, - ); } - } - - if (resolvedSuccessfully) { - pathSpecsToRead.push(currentPathSpec); - atPathToResolvedSpecMap.set(originalAtPath, currentPathSpec); - contentLabelsForDisplay.push(pathName); + if (resolvedSuccessfully) { + pathSpecsToRead.push(currentPathSpec); + atPathToResolvedSpecMap.set(originalAtPath, currentPathSpec); + contentLabelsForDisplay.push(pathName); + break; + } } } @@ -314,8 +328,7 @@ export async function handleAtCommand({ if ( i > 0 && initialQueryText.length > 0 && - !initialQueryText.endsWith(' ') && - resolvedSpec + !initialQueryText.endsWith(' ') ) { // Add space if previous part was text and didn't end with space, or if previous was @path const prevPart = commandParts[i - 1]; diff --git a/packages/cli/src/ui/hooks/slashCommandProcessor.test.ts b/packages/cli/src/ui/hooks/slashCommandProcessor.test.ts index 99e40fd7..dc32f4e9 100644 --- a/packages/cli/src/ui/hooks/slashCommandProcessor.test.ts +++ b/packages/cli/src/ui/hooks/slashCommandProcessor.test.ts @@ -4,15 +4,36 @@ * SPDX-License-Identifier: Apache-2.0 */ +const { logSlashCommand, SlashCommandEvent } = vi.hoisted(() => ({ + logSlashCommand: vi.fn(), + SlashCommandEvent: vi.fn((command, subCommand) => ({ command, subCommand })), +})); + +vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => { + const original = + await importOriginal(); + return { + ...original, + logSlashCommand, + SlashCommandEvent, + getIdeInstaller: vi.fn().mockReturnValue(null), + }; +}); + const { mockProcessExit } = vi.hoisted(() => ({ mockProcessExit: vi.fn((_code?: number): never => undefined as never), })); -vi.mock('node:process', () => ({ - default: { +vi.mock('node:process', () => { + const mockProcess = { exit: mockProcessExit, - }, -})); + platform: 'test-platform', + }; + 
return { + ...mockProcess, + default: mockProcess, + }; +}); const mockBuiltinLoadCommands = vi.fn(); vi.mock('../../services/BuiltinCommandLoader.js', () => ({ @@ -69,16 +90,18 @@ describe('useSlashCommandProcessor', () => { const mockAddItem = vi.fn(); const mockClearItems = vi.fn(); const mockLoadHistory = vi.fn(); - const mockSetShowHelp = vi.fn(); + const mockOpenThemeDialog = vi.fn(); const mockOpenAuthDialog = vi.fn(); const mockSetQuittingMessages = vi.fn(); const mockConfig = { - getProjectRoot: () => '/mock/cwd', - getSessionId: () => 'test-session', - getGeminiClient: () => ({ + getProjectRoot: vi.fn(() => '/mock/cwd'), + getSessionId: vi.fn(() => 'test-session'), + getGeminiClient: vi.fn(() => ({ setHistory: vi.fn().mockResolvedValue(undefined), - }), + })), + getExtensions: vi.fn(() => []), + getIdeMode: vi.fn(() => false), } as unknown as Config; const mockSettings = {} as LoadedSettings; @@ -109,9 +132,8 @@ describe('useSlashCommandProcessor', () => { mockClearItems, mockLoadHistory, vi.fn(), // refreshStatic - mockSetShowHelp, vi.fn(), // onDebugMessage - vi.fn(), // openThemeDialog + mockOpenThemeDialog, // openThemeDialog mockOpenAuthDialog, vi.fn(), // openEditorDialog vi.fn(), // toggleCorgiMode @@ -311,19 +333,19 @@ describe('useSlashCommandProcessor', () => { }); describe('Action Result Handling', () => { - it('should handle "dialog: help" action', async () => { + it('should handle "dialog: theme" action', async () => { const command = createTestCommand({ - name: 'helpcmd', - action: vi.fn().mockResolvedValue({ type: 'dialog', dialog: 'help' }), + name: 'themecmd', + action: vi.fn().mockResolvedValue({ type: 'dialog', dialog: 'theme' }), }); const result = setupProcessorHook([command]); await waitFor(() => expect(result.current.slashCommands).toHaveLength(1)); await act(async () => { - await result.current.handleSlashCommand('/helpcmd'); + await result.current.handleSlashCommand('/themecmd'); }); - 
expect(mockSetShowHelp).toHaveBeenCalledWith(true); + expect(mockOpenThemeDialog).toHaveBeenCalled(); }); it('should handle "load_history" action', async () => { @@ -796,15 +818,15 @@ describe('useSlashCommandProcessor', () => { mockClearItems, mockLoadHistory, vi.fn(), // refreshStatic - mockSetShowHelp, vi.fn(), // onDebugMessage vi.fn(), // openThemeDialog mockOpenAuthDialog, - vi.fn(), // openEditorDialog, + vi.fn(), // openEditorDialog vi.fn(), // toggleCorgiMode mockSetQuittingMessages, vi.fn(), // openPrivacyNotice - vi.fn(), // toggleVimEnabled + vi.fn().mockResolvedValue(false), // toggleVimEnabled + vi.fn(), // setIsProcessing ), ); @@ -813,4 +835,83 @@ describe('useSlashCommandProcessor', () => { expect(abortSpy).toHaveBeenCalledTimes(1); }); }); + + describe('Slash Command Logging', () => { + const mockCommandAction = vi.fn().mockResolvedValue({ type: 'handled' }); + const loggingTestCommands: SlashCommand[] = [ + createTestCommand({ + name: 'logtest', + action: mockCommandAction, + }), + createTestCommand({ + name: 'logwithsub', + subCommands: [ + createTestCommand({ + name: 'sub', + action: mockCommandAction, + }), + ], + }), + createTestCommand({ + name: 'logalias', + altNames: ['la'], + action: mockCommandAction, + }), + ]; + + beforeEach(() => { + mockCommandAction.mockClear(); + vi.mocked(logSlashCommand).mockClear(); + vi.mocked(SlashCommandEvent).mockClear(); + }); + + it('should log a simple slash command', async () => { + const result = setupProcessorHook(loggingTestCommands); + await waitFor(() => + expect(result.current.slashCommands.length).toBeGreaterThan(0), + ); + await act(async () => { + await result.current.handleSlashCommand('/logtest'); + }); + + expect(logSlashCommand).toHaveBeenCalledTimes(1); + expect(SlashCommandEvent).toHaveBeenCalledWith('logtest', undefined); + }); + + it('should log a slash command with a subcommand', async () => { + const result = setupProcessorHook(loggingTestCommands); + await waitFor(() => + 
expect(result.current.slashCommands.length).toBeGreaterThan(0), + ); + await act(async () => { + await result.current.handleSlashCommand('/logwithsub sub'); + }); + + expect(logSlashCommand).toHaveBeenCalledTimes(1); + expect(SlashCommandEvent).toHaveBeenCalledWith('logwithsub', 'sub'); + }); + + it('should log the command path when an alias is used', async () => { + const result = setupProcessorHook(loggingTestCommands); + await waitFor(() => + expect(result.current.slashCommands.length).toBeGreaterThan(0), + ); + await act(async () => { + await result.current.handleSlashCommand('/la'); + }); + expect(logSlashCommand).toHaveBeenCalledTimes(1); + expect(SlashCommandEvent).toHaveBeenCalledWith('logalias', undefined); + }); + + it('should not log for unknown commands', async () => { + const result = setupProcessorHook(loggingTestCommands); + await waitFor(() => + expect(result.current.slashCommands.length).toBeGreaterThan(0), + ); + await act(async () => { + await result.current.handleSlashCommand('/unknown'); + }); + expect(logSlashCommand).not.toHaveBeenCalled(); + }); + }); }); diff --git a/packages/cli/src/ui/hooks/slashCommandProcessor.ts b/packages/cli/src/ui/hooks/slashCommandProcessor.ts index 8500728e..13305ff4 100644 --- a/packages/cli/src/ui/hooks/slashCommandProcessor.ts +++ b/packages/cli/src/ui/hooks/slashCommandProcessor.ts @@ -13,6 +13,8 @@ import { Config, GitService, Logger, + logSlashCommand, + SlashCommandEvent, ToolConfirmationOutcome, } from '@qwen-code/qwen-code-core'; import { useSessionStats } from '../contexts/SessionContext.js'; @@ -40,7 +42,6 @@ export const useSlashCommandProcessor = ( clearItems: UseHistoryManagerReturn['clearItems'], loadHistory: UseHistoryManagerReturn['loadHistory'], refreshStatic: () => void, - setShowHelp: React.Dispatch<React.SetStateAction<boolean>>, onDebugMessage: (message: string) => void, openThemeDialog: () => void, openAuthDialog: () => void, @@ -103,6 +104,11 @@ export const useSlashCommandProcessor = ( selectedAuthType:
message.selectedAuthType, gcpProject: message.gcpProject, }; + } else if (message.type === MessageType.HELP) { + historyItemContent = { + type: 'help', + timestamp: message.timestamp, + }; } else if (message.type === MessageType.STATS) { historyItemContent = { type: 'stats', @@ -136,7 +142,6 @@ export const useSlashCommandProcessor = ( }, [addItem], ); - const commandContext = useMemo( (): CommandContext => ({ services: { @@ -185,6 +190,8 @@ export const useSlashCommandProcessor = ( ], ); + const ideMode = config?.getIdeMode(); + useEffect(() => { const controller = new AbortController(); const load = async () => { @@ -205,7 +212,7 @@ export const useSlashCommandProcessor = ( return () => { controller.abort(); }; - }, [config]); + }, [config, ideMode]); const handleSlashCommand = useCallback( async ( @@ -235,6 +242,7 @@ export const useSlashCommandProcessor = ( let currentCommands = commands; let commandToExecute: SlashCommand | undefined; let pathIndex = 0; + const canonicalPath: string[] = []; for (const part of commandPath) { // TODO: For better performance and architectural clarity, this two-pass @@ -255,6 +263,7 @@ export const useSlashCommandProcessor = ( if (foundCommand) { commandToExecute = foundCommand; + canonicalPath.push(foundCommand.name); pathIndex++; if (foundCommand.subCommands) { currentCommands = foundCommand.subCommands; @@ -270,6 +279,17 @@ export const useSlashCommandProcessor = ( const args = parts.slice(pathIndex).join(' '); if (commandToExecute.action) { + if (config) { + const resolvedCommandPath = canonicalPath; + const event = new SlashCommandEvent( + resolvedCommandPath[0], + resolvedCommandPath.length > 1 + ? 
resolvedCommandPath.slice(1).join(' ') + : undefined, + ); + logSlashCommand(config, event); + } + const fullCommandContext: CommandContext = { ...commandContext, invocation: { @@ -318,9 +338,6 @@ export const useSlashCommandProcessor = ( return { type: 'handled' }; case 'dialog': switch (result.dialog) { - case 'help': - setShowHelp(true); - return { type: 'handled' }; case 'auth': openAuthDialog(); return { type: 'handled' }; @@ -447,7 +464,6 @@ export const useSlashCommandProcessor = ( [ config, addItem, - setShowHelp, openAuthDialog, commands, commandContext, diff --git a/packages/cli/src/ui/hooks/useCompletion.test.ts b/packages/cli/src/ui/hooks/useCommandCompletion.test.ts similarity index 71% rename from packages/cli/src/ui/hooks/useCompletion.test.ts rename to packages/cli/src/ui/hooks/useCommandCompletion.test.ts index d12f185b..1f6d9a06 100644 --- a/packages/cli/src/ui/hooks/useCompletion.test.ts +++ b/packages/cli/src/ui/hooks/useCommandCompletion.test.ts @@ -7,21 +7,22 @@ /** @vitest-environment jsdom */ import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest'; -import { renderHook, act } from '@testing-library/react'; -import { useCompletion } from './useCompletion.js'; +import { renderHook, act, waitFor } from '@testing-library/react'; +import { useCommandCompletion } from './useCommandCompletion.js'; import * as fs from 'fs/promises'; import * as path from 'path'; import * as os from 'os'; import { CommandContext, SlashCommand } from '../commands/types.js'; import { Config, FileDiscoveryService } from '@qwen-code/qwen-code-core'; -import { useTextBuffer, TextBuffer } from '../components/shared/text-buffer.js'; +import { useTextBuffer } from '../components/shared/text-buffer.js'; -describe('useCompletion', () => { +describe('useCommandCompletion', () => { let testRootDir: string; let mockConfig: Config; // A minimal mock is sufficient for these tests. 
const mockCommandContext = {} as CommandContext; + let testDirs: string[]; async function createEmptyDir(...pathSegments: string[]) { const fullPath = path.join(testRootDir, ...pathSegments); @@ -37,10 +38,10 @@ describe('useCompletion', () => { } // Helper to create real TextBuffer objects within renderHook - function useTextBufferForTest(text: string) { + function useTextBufferForTest(text: string, cursorOffset?: number) { return useTextBuffer({ initialText: text, - initialCursorOffset: text.length, + initialCursorOffset: cursorOffset ?? text.length, viewport: { width: 80, height: 20 }, isValidPath: () => false, onChange: () => {}, @@ -49,10 +50,14 @@ describe('useCompletion', () => { beforeEach(async () => { testRootDir = await fs.mkdtemp( - path.join(os.tmpdir(), 'completion-unit-test-'), + path.join(os.tmpdir(), 'slash-completion-unit-test-'), ); + testDirs = [testRootDir]; mockConfig = { getTargetDir: () => testRootDir, + getWorkspaceContext: () => ({ + getDirectories: () => testDirs, + }), getProjectRoot: () => testRootDir, getFileFilteringOptions: vi.fn(() => ({ respectGitIgnore: true, @@ -77,11 +82,13 @@ describe('useCompletion', () => { { name: 'dummy', description: 'dummy' }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest(''), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, mockConfig, ), ); @@ -106,11 +113,13 @@ describe('useCompletion', () => { const { result, rerender } = renderHook( ({ text }) => { const textBuffer = useTextBufferForTest(text); - return useCompletion( + return useCommandCompletion( textBuffer, + testDirs, testRootDir, slashCommands, mockCommandContext, + false, mockConfig, ); }, @@ -127,7 +136,7 @@ describe('useCompletion', () => { expect(result.current.isLoadingSuggestions).toBe(false); }); - it('should reset all state to default values', () => { + it('should reset all state to default values', async () => { const slashCommands = 
[ { name: 'help', @@ -136,11 +145,13 @@ describe('useCompletion', () => { ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/help'), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, mockConfig, ), ); @@ -154,6 +165,11 @@ describe('useCompletion', () => { result.current.resetCompletionState(); }); + // Wait for async suggestions clearing + await waitFor(() => { + expect(result.current.suggestions).toEqual([]); + }); + expect(result.current.suggestions).toEqual([]); expect(result.current.activeSuggestionIndex).toBe(-1); expect(result.current.visibleStartIndex).toBe(0); @@ -168,11 +184,13 @@ describe('useCompletion', () => { { name: 'dummy', description: 'dummy' }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest(''), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, mockConfig, ), ); @@ -189,11 +207,14 @@ describe('useCompletion', () => { { name: 'dummy', description: 'dummy' }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest(''), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, ), ); @@ -213,11 +234,14 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/h'), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, ), ); @@ -240,11 +264,14 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/h'), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, ), ); @@ -268,11 +295,14 @@ describe('useCompletion', () => { { name: 'chat', description: 'Manage chat' }, ] as unknown 
as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/'), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, ), ); @@ -313,11 +343,14 @@ describe('useCompletion', () => { })) as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/command'), + testDirs, testRootDir, largeMockCommands, mockCommandContext, + false, + mockConfig, ), ); @@ -370,8 +403,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/'), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -392,8 +426,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/mem'), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -415,8 +450,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/usag'), // part of the word "usage" + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -441,8 +477,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/clear'), // No trailing space + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -472,8 +509,9 @@ describe('useCompletion', () => { ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest(query), + testDirs, testRootDir, mockSlashCommands, mockCommandContext, @@ -492,8 +530,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + 
useCommandCompletion( useTextBufferForTest('/clear '), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -512,8 +551,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/unknown-command'), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -545,8 +585,9 @@ describe('useCompletion', () => { ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/memory'), // Note: no trailing space + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -582,8 +623,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/memory'), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -617,8 +659,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/memory a'), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -648,8 +691,9 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/memory dothisnow'), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -690,8 +734,9 @@ describe('useCompletion', () => { ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/chat resume my-ch'), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -733,8 +778,9 @@ describe('useCompletion', () => { ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/chat resume '), + testDirs, testRootDir, slashCommands, mockCommandContext, @@ -767,11 
+813,14 @@ describe('useCompletion', () => { ] as unknown as SlashCommand[]; const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('/chat resume '), + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, ), ); @@ -794,11 +843,14 @@ describe('useCompletion', () => { await createTestFile('', 'README.md'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@s'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfig, ), ); @@ -827,11 +879,14 @@ describe('useCompletion', () => { await createTestFile('', 'src', 'index.ts'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@src/comp'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfig, ), ); @@ -852,11 +907,14 @@ describe('useCompletion', () => { await createTestFile('', 'src', 'index.ts'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@.'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfig, ), ); @@ -883,11 +941,14 @@ describe('useCompletion', () => { await createEmptyDir('dist'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@d'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfigNoRecursive, ), ); @@ -908,8 +969,9 @@ describe('useCompletion', () => { await createTestFile('', 'README.md'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@'), + testDirs, testRootDir, [], mockCommandContext, @@ -942,11 +1004,14 @@ describe('useCompletion', () => { .mockImplementation(() => {}); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfig, ), ); @@ -972,11 +1037,14 @@ describe('useCompletion', () => { await 
createEmptyDir('data'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@d'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfig, ), ); @@ -1005,11 +1073,14 @@ describe('useCompletion', () => { await createTestFile('', 'README.md'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfig, ), ); @@ -1037,11 +1108,14 @@ describe('useCompletion', () => { await createTestFile('', 'temp', 'temp.log'); const { result } = renderHook(() => - useCompletion( + useCommandCompletion( useTextBufferForTest('@t'), + testDirs, testRootDir, [], mockCommandContext, + false, + mockConfig, ), ); @@ -1076,21 +1150,21 @@ describe('useCompletion', () => { ], }, ] as unknown as SlashCommand[]; - // Create a mock buffer that we can spy on directly - const mockBuffer = { - text: '/mem', - setText: vi.fn(), - } as unknown as TextBuffer; - const { result } = renderHook(() => - useCompletion( - mockBuffer, + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest('/mem'); + const completion = useCommandCompletion( + textBuffer, + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, - ), - ); + ); + return { ...completion, textBuffer }; + }); expect(result.current.suggestions.map((s) => s.value)).toEqual([ 'memory', @@ -1100,14 +1174,10 @@ describe('useCompletion', () => { result.current.handleAutocomplete(0); }); - expect(mockBuffer.setText).toHaveBeenCalledWith('/memory '); + expect(result.current.textBuffer.text).toBe('/memory '); }); it('should append a sub-command when the parent is complete', () => { - const mockBuffer = { - text: '/memory', - setText: vi.fn(), - } as unknown as TextBuffer; const slashCommands = [ { name: 'memory', @@ -1125,15 +1195,20 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; - const { result } = 
renderHook(() => - useCompletion( - mockBuffer, + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest('/memory'); + const completion = useCommandCompletion( + textBuffer, + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, - ), - ); + ); + return { ...completion, textBuffer }; + }); // Suggestions are populated by useEffect expect(result.current.suggestions.map((s) => s.value)).toEqual([ @@ -1145,14 +1220,10 @@ describe('useCompletion', () => { result.current.handleAutocomplete(1); // index 1 is 'add' }); - expect(mockBuffer.setText).toHaveBeenCalledWith('/memory add '); + expect(result.current.textBuffer.text).toBe('/memory add '); }); it('should complete a command with an alternative name', () => { - const mockBuffer = { - text: '/?', - setText: vi.fn(), - } as unknown as TextBuffer; const slashCommands = [ { name: 'memory', @@ -1170,15 +1241,20 @@ describe('useCompletion', () => { }, ] as unknown as SlashCommand[]; - const { result } = renderHook(() => - useCompletion( - mockBuffer, + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest('/?'); + const completion = useCommandCompletion( + textBuffer, + testDirs, testRootDir, slashCommands, mockCommandContext, + false, + mockConfig, - ), - ); + ); + return { ...completion, textBuffer }; + }); result.current.suggestions.push({ label: 'help', @@ -1190,43 +1266,23 @@ describe('useCompletion', () => { result.current.handleAutocomplete(0); }); - expect(mockBuffer.setText).toHaveBeenCalledWith('/help '); + expect(result.current.textBuffer.text).toBe('/help '); }); - it('should complete a file path', async () => { - const mockBuffer = { - text: '@src/fi', - lines: ['@src/fi'], - cursor: [0, 7], - setText: vi.fn(), - replaceRangeByOffset: vi.fn(), - } as unknown as TextBuffer; - const slashCommands = [ - { - name: 'memory', - description: 'Manage memory', - subCommands: [ - { - name: 'show', - description: 'Show memory', - }, - { - name: 
'add', - description: 'Add to memory', - }, - ], - }, - ] as unknown as SlashCommand[]; - - const { result } = renderHook(() => - useCompletion( - mockBuffer, + it('should complete a file path', () => { + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest('@src/fi'); + const completion = useCommandCompletion( + textBuffer, + testDirs, testRootDir, - slashCommands, + [], mockCommandContext, + false, mockConfig, - ), - ); + ); + return { ...completion, textBuffer }; + }); result.current.suggestions.push({ label: 'file1.txt', @@ -1237,11 +1293,324 @@ describe('useCompletion', () => { result.current.handleAutocomplete(0); }); - expect(mockBuffer.replaceRangeByOffset).toHaveBeenCalledWith( - 5, // after '@src/' - mockBuffer.text.length, - 'file1.txt', + expect(result.current.textBuffer.text).toBe('@src/file1.txt '); + }); + + it('should complete a file path when cursor is not at the end of the line', () => { + const text = '@src/fi le.txt'; + const cursorOffset = 7; // after "i" + + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest(text, cursorOffset); + const completion = useCommandCompletion( + textBuffer, + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ); + return { ...completion, textBuffer }; + }); + + result.current.suggestions.push({ + label: 'file1.txt', + value: 'file1.txt', + }); + + act(() => { + result.current.handleAutocomplete(0); + }); + + expect(result.current.textBuffer.text).toBe('@src/file1.txt le.txt'); + }); + + it('should complete the correct file path with multiple @-commands', () => { + const text = '@file1.txt @src/fi'; + + const { result } = renderHook(() => { + const textBuffer = useTextBufferForTest(text); + const completion = useCommandCompletion( + textBuffer, + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ); + return { ...completion, textBuffer }; + }); + + result.current.suggestions.push({ + label: 'file2.txt', + 
value: 'file2.txt', + }); + + act(() => { + result.current.handleAutocomplete(0); + }); + + expect(result.current.textBuffer.text).toBe('@file1.txt @src/file2.txt '); + }); + }); + + describe('File Path Escaping', () => { + it('should escape special characters in file names', async () => { + await createTestFile('', 'my file.txt'); + await createTestFile('', 'file(1).txt'); + await createTestFile('', 'backup[old].txt'); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@my'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const suggestion = result.current.suggestions.find( + (s) => s.label === 'my file.txt', + ); + expect(suggestion).toBeDefined(); + expect(suggestion!.value).toBe('my\\ file.txt'); + }); + + it('should escape parentheses in file names', async () => { + await createTestFile('', 'document(final).docx'); + await createTestFile('', 'script(v2).sh'); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@doc'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), + ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const suggestion = result.current.suggestions.find( + (s) => s.label === 'document(final).docx', + ); + expect(suggestion).toBeDefined(); + expect(suggestion!.value).toBe('document\\(final\\).docx'); + }); + + it('should escape square brackets in file names', async () => { + await createTestFile('', 'backup[2024-01-01].zip'); + await createTestFile('', 'config[dev].json'); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@backup'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), + ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const 
suggestion = result.current.suggestions.find( + (s) => s.label === 'backup[2024-01-01].zip', + ); + expect(suggestion).toBeDefined(); + expect(suggestion!.value).toBe('backup\\[2024-01-01\\].zip'); + }); + + it('should escape multiple special characters in file names', async () => { + await createTestFile('', 'my file (backup) [v1.2].txt'); + await createTestFile('', 'data & config {prod}.json'); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@my'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), + ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const suggestion = result.current.suggestions.find( + (s) => s.label === 'my file (backup) [v1.2].txt', + ); + expect(suggestion).toBeDefined(); + expect(suggestion!.value).toBe( + 'my\\ file\\ \\(backup\\)\\ \\[v1.2\\].txt', + ); + }); + + it('should preserve path separators while escaping special characters', async () => { + await createTestFile( + '', + 'projects', + 'my project (2024)', + 'file with spaces.txt', + ); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@projects/my'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), + ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const suggestion = result.current.suggestions.find((s) => + s.label.includes('my project'), + ); + expect(suggestion).toBeDefined(); + // Should escape spaces and parentheses but preserve forward slashes + expect(suggestion!.value).toMatch(/my\\ project\\ \\\(2024\\\)/); + expect(suggestion!.value).toContain('/'); // Should contain forward slash for path separator + }); + + it('should normalize Windows path separators to forward slashes while preserving escaping', async () => { + // Create test with complex nested structure + await createTestFile( + '', + 'deep', + 'nested', + 'special folder', + 
'file with (parentheses).txt', + ); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@deep/nested/special'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), + ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const suggestion = result.current.suggestions.find((s) => + s.label.includes('special folder'), + ); + expect(suggestion).toBeDefined(); + // Should use forward slashes for path separators and escape spaces + expect(suggestion!.value).toContain('special\\ folder/'); + expect(suggestion!.value).not.toContain('\\\\'); // Should not contain double backslashes for path separators + }); + + it('should handle directory names with special characters', async () => { + await createEmptyDir('my documents (personal)'); + await createEmptyDir('config [production]'); + await createEmptyDir('data & logs'); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), + ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const suggestions = result.current.suggestions; + + const docSuggestion = suggestions.find( + (s) => s.label === 'my documents (personal)/', + ); + expect(docSuggestion).toBeDefined(); + expect(docSuggestion!.value).toBe('my\\ documents\\ \\(personal\\)/'); + + const configSuggestion = suggestions.find( + (s) => s.label === 'config [production]/', + ); + expect(configSuggestion).toBeDefined(); + expect(configSuggestion!.value).toBe('config\\ \\[production\\]/'); + + const dataSuggestion = suggestions.find( + (s) => s.label === 'data & logs/', + ); + expect(dataSuggestion).toBeDefined(); + expect(dataSuggestion!.value).toBe('data\\ \\&\\ logs/'); + }); + + it('should handle files with various shell metacharacters', async () => { + await createTestFile('', 'file$var.txt'); + 
await createTestFile('', 'important!.md'); + + const { result } = renderHook(() => + useCommandCompletion( + useTextBufferForTest('@'), + testDirs, + testRootDir, + [], + mockCommandContext, + false, + mockConfig, + ), + ); + + await act(async () => { + await new Promise((resolve) => setTimeout(resolve, 150)); + }); + + const suggestions = result.current.suggestions; + + const dollarSuggestion = suggestions.find( + (s) => s.label === 'file$var.txt', + ); + expect(dollarSuggestion).toBeDefined(); + expect(dollarSuggestion!.value).toBe('file\\$var.txt'); + + const importantSuggestion = suggestions.find( + (s) => s.label === 'important!.md', + ); + expect(importantSuggestion).toBeDefined(); + expect(importantSuggestion!.value).toBe('important\\!.md'); }); }); }); diff --git a/packages/cli/src/ui/hooks/useCommandCompletion.tsx b/packages/cli/src/ui/hooks/useCommandCompletion.tsx new file mode 100644 index 00000000..7f4640e7 --- /dev/null +++ b/packages/cli/src/ui/hooks/useCommandCompletion.tsx @@ -0,0 +1,661 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { useEffect, useCallback, useMemo, useRef } from 'react'; +import * as fs from 'fs/promises'; +import * as path from 'path'; +import { glob } from 'glob'; +import { + isNodeError, + escapePath, + unescapePath, + getErrorMessage, + Config, + FileDiscoveryService, + DEFAULT_FILE_FILTERING_OPTIONS, + SHELL_SPECIAL_CHARS, +} from '@qwen-code/qwen-code-core'; +import { Suggestion } from '../components/SuggestionsDisplay.js'; +import { CommandContext, SlashCommand } from '../commands/types.js'; +import { + logicalPosToOffset, + TextBuffer, +} from '../components/shared/text-buffer.js'; +import { isSlashCommand } from '../utils/commandUtils.js'; +import { toCodePoints } from '../utils/textUtils.js'; +import { useCompletion } from './useCompletion.js'; + +export interface UseCommandCompletionReturn { + suggestions: Suggestion[]; + activeSuggestionIndex: number; + 
visibleStartIndex: number; + showSuggestions: boolean; + isLoadingSuggestions: boolean; + isPerfectMatch: boolean; + setActiveSuggestionIndex: React.Dispatch<React.SetStateAction<number>>; + setShowSuggestions: React.Dispatch<React.SetStateAction<boolean>>; + resetCompletionState: () => void; + navigateUp: () => void; + navigateDown: () => void; + handleAutocomplete: (indexToUse: number) => void; +} + +export function useCommandCompletion( + buffer: TextBuffer, + dirs: readonly string[], + cwd: string, + slashCommands: readonly SlashCommand[], + commandContext: CommandContext, + reverseSearchActive: boolean = false, + config?: Config, +): UseCommandCompletionReturn { + const { + suggestions, + activeSuggestionIndex, + visibleStartIndex, + showSuggestions, + isLoadingSuggestions, + isPerfectMatch, + + setSuggestions, + setShowSuggestions, + setActiveSuggestionIndex, + setIsLoadingSuggestions, + setIsPerfectMatch, + setVisibleStartIndex, + + resetCompletionState, + navigateUp, + navigateDown, + } = useCompletion(); + + const completionStart = useRef(-1); + const completionEnd = useRef(-1); + + const cursorRow = buffer.cursor[0]; + const cursorCol = buffer.cursor[1]; + + // Check if cursor is after @ or / without unescaped spaces + const commandIndex = useMemo(() => { + const currentLine = buffer.lines[cursorRow] || ''; + if (cursorRow === 0 && isSlashCommand(currentLine.trim())) { + return currentLine.indexOf('/'); + } + + // For other completions like '@', we search backwards from the cursor. + + const codePoints = toCodePoints(currentLine); + for (let i = cursorCol - 1; i >= 0; i--) { + const char = codePoints[i]; + + if (char === ' ') { + // Check for unescaped spaces. + let backslashCount = 0; + for (let j = i - 1; j >= 0 && codePoints[j] === '\\'; j--) { + backslashCount++; + } + if (backslashCount % 2 === 0) { + return -1; // Inactive on unescaped space. + } + } else if (char === '@') { + // Active if we find an '@' before any unescaped space.
+ return i; + } + } + + return -1; + }, [cursorRow, cursorCol, buffer.lines]); + + useEffect(() => { + if (commandIndex === -1 || reverseSearchActive) { + setTimeout(resetCompletionState, 0); + return; + } + + const currentLine = buffer.lines[cursorRow] || ''; + const codePoints = toCodePoints(currentLine); + + if (codePoints[commandIndex] === '/') { + // Always reset perfect match at the beginning of processing. + setIsPerfectMatch(false); + + const fullPath = currentLine.substring(commandIndex + 1); + const hasTrailingSpace = currentLine.endsWith(' '); + + // Get all non-empty parts of the command. + const rawParts = fullPath.split(/\s+/).filter((p) => p); + + let commandPathParts = rawParts; + let partial = ''; + + // If there's no trailing space, the last part is potentially a partial segment. + // We tentatively separate it. + if (!hasTrailingSpace && rawParts.length > 0) { + partial = rawParts[rawParts.length - 1]; + commandPathParts = rawParts.slice(0, -1); + } + + // Traverse the Command Tree using the tentative completed path + let currentLevel: readonly SlashCommand[] | undefined = slashCommands; + let leafCommand: SlashCommand | null = null; + + for (const part of commandPathParts) { + if (!currentLevel) { + leafCommand = null; + currentLevel = []; + break; + } + const found: SlashCommand | undefined = currentLevel.find( + (cmd) => cmd.name === part || cmd.altNames?.includes(part), + ); + if (found) { + leafCommand = found; + currentLevel = found.subCommands as + | readonly SlashCommand[] + | undefined; + } else { + leafCommand = null; + currentLevel = []; + break; + } + } + + let exactMatchAsParent: SlashCommand | undefined; + // Handle the Ambiguous Case + if (!hasTrailingSpace && currentLevel) { + exactMatchAsParent = currentLevel.find( + (cmd) => + (cmd.name === partial || cmd.altNames?.includes(partial)) && + cmd.subCommands, + ); + + if (exactMatchAsParent) { + // It's a perfect match for a parent command. Override our initial guess. 
+ // Treat it as a completed command path. + leafCommand = exactMatchAsParent; + currentLevel = exactMatchAsParent.subCommands; + partial = ''; // We now want to suggest ALL of its sub-commands. + } + } + + // Check for perfect, executable match + if (!hasTrailingSpace) { + if (leafCommand && partial === '' && leafCommand.action) { + // Case: /command - command has action, no sub-commands were suggested + setIsPerfectMatch(true); + } else if (currentLevel) { + // Case: /command subcommand + const perfectMatch = currentLevel.find( + (cmd) => + (cmd.name === partial || cmd.altNames?.includes(partial)) && + cmd.action, + ); + if (perfectMatch) { + setIsPerfectMatch(true); + } + } + } + + const depth = commandPathParts.length; + const isArgumentCompletion = + leafCommand?.completion && + (hasTrailingSpace || + (rawParts.length > depth && depth > 0 && partial !== '')); + + // Set completion range + if (hasTrailingSpace || exactMatchAsParent) { + completionStart.current = currentLine.length; + completionEnd.current = currentLine.length; + } else if (partial) { + if (isArgumentCompletion) { + const commandSoFar = `/${commandPathParts.join(' ')}`; + const argStartIndex = + commandSoFar.length + (commandPathParts.length > 0 ? 1 : 0); + completionStart.current = argStartIndex; + } else { + completionStart.current = currentLine.length - partial.length; + } + completionEnd.current = currentLine.length; + } else { + // e.g. 
/ + completionStart.current = commandIndex + 1; + completionEnd.current = currentLine.length; + } + + // Provide Suggestions based on the now-corrected context + if (isArgumentCompletion) { + const fetchAndSetSuggestions = async () => { + setIsLoadingSuggestions(true); + const argString = rawParts.slice(depth).join(' '); + const results = + (await leafCommand!.completion!(commandContext, argString)) || []; + const finalSuggestions = results.map((s) => ({ label: s, value: s })); + setSuggestions(finalSuggestions); + setShowSuggestions(finalSuggestions.length > 0); + setActiveSuggestionIndex(finalSuggestions.length > 0 ? 0 : -1); + setIsLoadingSuggestions(false); + }; + fetchAndSetSuggestions(); + return; + } + + // Command/Sub-command Completion + const commandsToSearch = currentLevel || []; + if (commandsToSearch.length > 0) { + let potentialSuggestions = commandsToSearch.filter( + (cmd) => + cmd.description && + (cmd.name.startsWith(partial) || + cmd.altNames?.some((alt) => alt.startsWith(partial))), + ); + + // If a user's input is an exact match and it is a leaf command, + // enter should submit immediately. + if (potentialSuggestions.length > 0 && !hasTrailingSpace) { + const perfectMatch = potentialSuggestions.find( + (s) => s.name === partial || s.altNames?.includes(partial), + ); + if (perfectMatch && perfectMatch.action) { + potentialSuggestions = []; + } + } + + const finalSuggestions = potentialSuggestions.map((cmd) => ({ + label: cmd.name, + value: cmd.name, + description: cmd.description, + })); + + setSuggestions(finalSuggestions); + setShowSuggestions(finalSuggestions.length > 0); + setActiveSuggestionIndex(finalSuggestions.length > 0 ? 0 : -1); + setIsLoadingSuggestions(false); + return; + } + + // If we fall through, no suggestions are available. 
+      resetCompletionState();
+      return;
+    }
+
+    // Handle At Command Completion
+    completionEnd.current = codePoints.length;
+    for (let i = cursorCol; i < codePoints.length; i++) {
+      if (codePoints[i] === ' ') {
+        let backslashCount = 0;
+        for (let j = i - 1; j >= 0 && codePoints[j] === '\\'; j--) {
+          backslashCount++;
+        }
+
+        if (backslashCount % 2 === 0) {
+          completionEnd.current = i;
+          break;
+        }
+      }
+    }
+
+    const pathStart = commandIndex + 1;
+    const partialPath = currentLine.substring(pathStart, completionEnd.current);
+    const lastSlashIndex = partialPath.lastIndexOf('/');
+    completionStart.current =
+      lastSlashIndex === -1 ? pathStart : pathStart + lastSlashIndex + 1;
+    const baseDirRelative =
+      lastSlashIndex === -1
+        ? '.'
+        : partialPath.substring(0, lastSlashIndex + 1);
+    const prefix = unescapePath(
+      lastSlashIndex === -1
+        ? partialPath
+        : partialPath.substring(lastSlashIndex + 1),
+    );
+
+    let isMounted = true;
+
+    const findFilesRecursively = async (
+      startDir: string,
+      searchPrefix: string,
+      fileDiscovery: FileDiscoveryService | null,
+      filterOptions: {
+        respectGitIgnore?: boolean;
+        respectGeminiIgnore?: boolean;
+      },
+      currentRelativePath = '',
+      depth = 0,
+      maxDepth = 10, // Limit recursion depth
+      maxResults = 50, // Limit number of results
+    ): Promise<Suggestion[]> => {
+      if (depth > maxDepth) {
+        return [];
+      }
+
+      const lowerSearchPrefix = searchPrefix.toLowerCase();
+      let foundSuggestions: Suggestion[] = [];
+      try {
+        const entries = await fs.readdir(startDir, { withFileTypes: true });
+        for (const entry of entries) {
+          if (foundSuggestions.length >= maxResults) break;
+
+          const entryPathRelative = path.join(currentRelativePath, entry.name);
+          const entryPathFromRoot = path.relative(
+            startDir,
+            path.join(startDir, entry.name),
+          );
+
+          // Conditionally ignore dotfiles
+          if (!searchPrefix.startsWith('.') && entry.name.startsWith('.')) {
+            continue;
+          }
+
+          // Check if this entry should be ignored by filtering options
+          if (
+            fileDiscovery &&
+            fileDiscovery.shouldIgnoreFile(entryPathFromRoot, filterOptions)
+          ) {
+            continue;
+          }
+
+          if (entry.name.toLowerCase().startsWith(lowerSearchPrefix)) {
+            foundSuggestions.push({
+              label: entryPathRelative + (entry.isDirectory() ? '/' : ''),
+              value: escapePath(
+                entryPathRelative + (entry.isDirectory() ? '/' : ''),
+              ),
+            });
+          }
+          if (
+            entry.isDirectory() &&
+            entry.name !== 'node_modules' &&
+            !entry.name.startsWith('.')
+          ) {
+            if (foundSuggestions.length < maxResults) {
+              foundSuggestions = foundSuggestions.concat(
+                await findFilesRecursively(
+                  path.join(startDir, entry.name),
+                  searchPrefix, // Pass original searchPrefix for recursive calls
+                  fileDiscovery,
+                  filterOptions,
+                  entryPathRelative,
+                  depth + 1,
+                  maxDepth,
+                  maxResults - foundSuggestions.length,
+                ),
+              );
+            }
+          }
+        }
+      } catch (_err) {
+        // Ignore errors like permission denied or ENOENT during recursive search
+      }
+      return foundSuggestions.slice(0, maxResults);
+    };
+
+    const findFilesWithGlob = async (
+      searchPrefix: string,
+      fileDiscoveryService: FileDiscoveryService,
+      filterOptions: {
+        respectGitIgnore?: boolean;
+        respectGeminiIgnore?: boolean;
+      },
+      searchDir: string,
+      maxResults = 50,
+    ): Promise<Suggestion[]> => {
+      const globPattern = `**/${searchPrefix}*`;
+      const files = await glob(globPattern, {
+        cwd: searchDir,
+        dot: searchPrefix.startsWith('.'),
+        nocase: true,
+      });
+
+      const suggestions: Suggestion[] = files
+        .filter((file) => {
+          if (fileDiscoveryService) {
+            return !fileDiscoveryService.shouldIgnoreFile(file, filterOptions);
+          }
+          return true;
+        })
+        .map((file: string) => {
+          const absolutePath = path.resolve(searchDir, file);
+          const label = path.relative(cwd, absolutePath);
+          return {
+            label,
+            value: escapePath(label),
+          };
+        })
+        .slice(0, maxResults);
+
+      return suggestions;
+    };
+
+    const fetchSuggestions = async () => {
+      setIsLoadingSuggestions(true);
+      let fetchedSuggestions: Suggestion[] = [];
+
+      const fileDiscoveryService = config ?
config.getFileService() : null; + const enableRecursiveSearch = + config?.getEnableRecursiveFileSearch() ?? true; + const filterOptions = + config?.getFileFilteringOptions() ?? DEFAULT_FILE_FILTERING_OPTIONS; + + try { + // If there's no slash, or it's the root, do a recursive search from workspace directories + for (const dir of dirs) { + let fetchedSuggestionsPerDir: Suggestion[] = []; + if ( + partialPath.indexOf('/') === -1 && + prefix && + enableRecursiveSearch + ) { + if (fileDiscoveryService) { + fetchedSuggestionsPerDir = await findFilesWithGlob( + prefix, + fileDiscoveryService, + filterOptions, + dir, + ); + } else { + fetchedSuggestionsPerDir = await findFilesRecursively( + dir, + prefix, + null, + filterOptions, + ); + } + } else { + // Original behavior: list files in the specific directory + const lowerPrefix = prefix.toLowerCase(); + const baseDirAbsolute = path.resolve(dir, baseDirRelative); + const entries = await fs.readdir(baseDirAbsolute, { + withFileTypes: true, + }); + + // Filter entries using git-aware filtering + const filteredEntries = []; + for (const entry of entries) { + // Conditionally ignore dotfiles + if (!prefix.startsWith('.') && entry.name.startsWith('.')) { + continue; + } + if (!entry.name.toLowerCase().startsWith(lowerPrefix)) continue; + + const relativePath = path.relative( + dir, + path.join(baseDirAbsolute, entry.name), + ); + if ( + fileDiscoveryService && + fileDiscoveryService.shouldIgnoreFile( + relativePath, + filterOptions, + ) + ) { + continue; + } + + filteredEntries.push(entry); + } + + fetchedSuggestionsPerDir = filteredEntries.map((entry) => { + const absolutePath = path.resolve(baseDirAbsolute, entry.name); + const label = + cwd === dir ? entry.name : path.relative(cwd, absolutePath); + const suggestionLabel = entry.isDirectory() ? 
label + '/' : label; + return { + label: suggestionLabel, + value: escapePath(suggestionLabel), + }; + }); + } + fetchedSuggestions = [ + ...fetchedSuggestions, + ...fetchedSuggestionsPerDir, + ]; + } + + // Like glob, we always return forward slashes for path separators, even on Windows. + // But preserve backslash escaping for special characters. + const specialCharsLookahead = `(?![${SHELL_SPECIAL_CHARS.source.slice(1, -1)}])`; + const pathSeparatorRegex = new RegExp( + `\\\\${specialCharsLookahead}`, + 'g', + ); + fetchedSuggestions = fetchedSuggestions.map((suggestion) => ({ + ...suggestion, + label: suggestion.label.replace(pathSeparatorRegex, '/'), + value: suggestion.value.replace(pathSeparatorRegex, '/'), + })); + + // Sort by depth, then directories first, then alphabetically + fetchedSuggestions.sort((a, b) => { + const depthA = (a.label.match(/\//g) || []).length; + const depthB = (b.label.match(/\//g) || []).length; + + if (depthA !== depthB) { + return depthA - depthB; + } + + const aIsDir = a.label.endsWith('/'); + const bIsDir = b.label.endsWith('/'); + if (aIsDir && !bIsDir) return -1; + if (!aIsDir && bIsDir) return 1; + + // exclude extension when comparing + const filenameA = a.label.substring( + 0, + a.label.length - path.extname(a.label).length, + ); + const filenameB = b.label.substring( + 0, + b.label.length - path.extname(b.label).length, + ); + + return ( + filenameA.localeCompare(filenameB) || a.label.localeCompare(b.label) + ); + }); + + if (isMounted) { + setSuggestions(fetchedSuggestions); + setShowSuggestions(fetchedSuggestions.length > 0); + setActiveSuggestionIndex(fetchedSuggestions.length > 0 ? 
0 : -1); + setVisibleStartIndex(0); + } + } catch (error: unknown) { + if (isNodeError(error) && error.code === 'ENOENT') { + if (isMounted) { + setSuggestions([]); + setShowSuggestions(false); + } + } else { + console.error( + `Error fetching completion suggestions for ${partialPath}: ${getErrorMessage(error)}`, + ); + if (isMounted) { + resetCompletionState(); + } + } + } + if (isMounted) { + setIsLoadingSuggestions(false); + } + }; + + const debounceTimeout = setTimeout(fetchSuggestions, 100); + + return () => { + isMounted = false; + clearTimeout(debounceTimeout); + }; + }, [ + buffer.text, + cursorRow, + cursorCol, + buffer.lines, + dirs, + cwd, + commandIndex, + resetCompletionState, + slashCommands, + commandContext, + config, + reverseSearchActive, + setSuggestions, + setShowSuggestions, + setActiveSuggestionIndex, + setIsLoadingSuggestions, + setIsPerfectMatch, + setVisibleStartIndex, + ]); + + const handleAutocomplete = useCallback( + (indexToUse: number) => { + if (indexToUse < 0 || indexToUse >= suggestions.length) { + return; + } + const suggestion = suggestions[indexToUse].value; + + if (completionStart.current === -1 || completionEnd.current === -1) { + return; + } + + const isSlash = (buffer.lines[cursorRow] || '')[commandIndex] === '/'; + let suggestionText = suggestion; + if (isSlash) { + // If we are inserting (not replacing), and the preceding character is not a space, add one. 
+ if ( + completionStart.current === completionEnd.current && + completionStart.current > commandIndex + 1 && + (buffer.lines[cursorRow] || '')[completionStart.current - 1] !== ' ' + ) { + suggestionText = ' ' + suggestionText; + } + } + + suggestionText += ' '; + + buffer.replaceRangeByOffset( + logicalPosToOffset(buffer.lines, cursorRow, completionStart.current), + logicalPosToOffset(buffer.lines, cursorRow, completionEnd.current), + suggestionText, + ); + }, + [cursorRow, buffer, suggestions, commandIndex], + ); + + return { + suggestions, + activeSuggestionIndex, + visibleStartIndex, + showSuggestions, + isLoadingSuggestions, + isPerfectMatch, + setActiveSuggestionIndex, + setShowSuggestions, + resetCompletionState, + navigateUp, + navigateDown, + handleAutocomplete, + }; +} diff --git a/packages/cli/src/ui/hooks/useCompletion.ts b/packages/cli/src/ui/hooks/useCompletion.ts index 67244828..242b4528 100644 --- a/packages/cli/src/ui/hooks/useCompletion.ts +++ b/packages/cli/src/ui/hooks/useCompletion.ts @@ -4,27 +4,12 @@ * SPDX-License-Identifier: Apache-2.0 */ -import { useState, useEffect, useCallback, useMemo } from 'react'; -import * as fs from 'fs/promises'; -import * as path from 'path'; -import { glob } from 'glob'; -import { - isNodeError, - escapePath, - unescapePath, - getErrorMessage, - Config, - FileDiscoveryService, - DEFAULT_FILE_FILTERING_OPTIONS, -} from '@qwen-code/qwen-code-core'; +import { useState, useCallback } from 'react'; + import { MAX_SUGGESTIONS_TO_SHOW, Suggestion, } from '../components/SuggestionsDisplay.js'; -import { CommandContext, SlashCommand } from '../commands/types.js'; -import { TextBuffer } from '../components/shared/text-buffer.js'; -import { isSlashCommand } from '../utils/commandUtils.js'; -import { toCodePoints } from '../utils/textUtils.js'; export interface UseCompletionReturn { suggestions: Suggestion[]; @@ -33,21 +18,18 @@ export interface UseCompletionReturn { showSuggestions: boolean; isLoadingSuggestions: boolean; 
   isPerfectMatch: boolean;
+  setSuggestions: React.Dispatch<React.SetStateAction<Suggestion[]>>;
   setActiveSuggestionIndex: React.Dispatch<React.SetStateAction<number>>;
+  setVisibleStartIndex: React.Dispatch<React.SetStateAction<number>>;
+  setIsLoadingSuggestions: React.Dispatch<React.SetStateAction<boolean>>;
+  setIsPerfectMatch: React.Dispatch<React.SetStateAction<boolean>>;
   setShowSuggestions: React.Dispatch<React.SetStateAction<boolean>>;
   resetCompletionState: () => void;
   navigateUp: () => void;
   navigateDown: () => void;
-  handleAutocomplete: (indexToUse: number) => void;
 }
 
-export function useCompletion(
-  buffer: TextBuffer,
-  cwd: string,
-  slashCommands: readonly SlashCommand[],
-  commandContext: CommandContext,
-  config?: Config,
-): UseCompletionReturn {
+export function useCompletion(): UseCompletionReturn {
   const [suggestions, setSuggestions] = useState<Suggestion[]>([]);
   const [activeSuggestionIndex, setActiveSuggestionIndex] = useState(-1);
@@ -124,553 +106,6 @@ export function useCompletion(
       return newActiveIndex;
     });
   }, [suggestions.length]);
-
-  // Check if cursor is after @ or / without unescaped spaces
-  const isActive = useMemo(() => {
-    if (isSlashCommand(buffer.text.trim())) {
-      return true;
-    }
-
-    // For other completions like '@', we search backwards from the cursor.
-    const [row, col] = buffer.cursor;
-    const currentLine = buffer.lines[row] || '';
-    const codePoints = toCodePoints(currentLine);
-
-    for (let i = col - 1; i >= 0; i--) {
-      const char = codePoints[i];
-
-      if (char === ' ') {
-        // Check for unescaped spaces.
-        let backslashCount = 0;
-        for (let j = i - 1; j >= 0 && codePoints[j] === '\\'; j--) {
-          backslashCount++;
-        }
-        if (backslashCount % 2 === 0) {
-          return false; // Inactive on unescaped space.
-        }
-      } else if (char === '@') {
-        // Active if we find an '@' before any unescaped space.
-        return true;
-      }
-    }
-
-    return false;
-  }, [buffer.text, buffer.cursor, buffer.lines]);
-
-  useEffect(() => {
-    if (!isActive) {
-      resetCompletionState();
-      return;
-    }
-
-    const trimmedQuery = buffer.text.trimStart();
-
-    if (trimmedQuery.startsWith('/')) {
-      // Always reset perfect match at the beginning of processing.
- setIsPerfectMatch(false); - - const fullPath = trimmedQuery.substring(1); - const hasTrailingSpace = trimmedQuery.endsWith(' '); - - // Get all non-empty parts of the command. - const rawParts = fullPath.split(/\s+/).filter((p) => p); - - let commandPathParts = rawParts; - let partial = ''; - - // If there's no trailing space, the last part is potentially a partial segment. - // We tentatively separate it. - if (!hasTrailingSpace && rawParts.length > 0) { - partial = rawParts[rawParts.length - 1]; - commandPathParts = rawParts.slice(0, -1); - } - - // Traverse the Command Tree using the tentative completed path - let currentLevel: readonly SlashCommand[] | undefined = slashCommands; - let leafCommand: SlashCommand | null = null; - - for (const part of commandPathParts) { - if (!currentLevel) { - leafCommand = null; - currentLevel = []; - break; - } - const found: SlashCommand | undefined = currentLevel.find( - (cmd) => cmd.name === part || cmd.altNames?.includes(part), - ); - if (found) { - leafCommand = found; - currentLevel = found.subCommands as - | readonly SlashCommand[] - | undefined; - } else { - leafCommand = null; - currentLevel = []; - break; - } - } - - // Handle the Ambiguous Case - if (!hasTrailingSpace && currentLevel) { - const exactMatchAsParent = currentLevel.find( - (cmd) => - (cmd.name === partial || cmd.altNames?.includes(partial)) && - cmd.subCommands, - ); - - if (exactMatchAsParent) { - // It's a perfect match for a parent command. Override our initial guess. - // Treat it as a completed command path. - leafCommand = exactMatchAsParent; - currentLevel = exactMatchAsParent.subCommands; - partial = ''; // We now want to suggest ALL of its sub-commands. 
- } - } - - // Check for perfect, executable match - if (!hasTrailingSpace) { - if (leafCommand && partial === '' && leafCommand.action) { - // Case: /command - command has action, no sub-commands were suggested - setIsPerfectMatch(true); - } else if (currentLevel) { - // Case: /command subcommand - const perfectMatch = currentLevel.find( - (cmd) => - (cmd.name === partial || cmd.altNames?.includes(partial)) && - cmd.action, - ); - if (perfectMatch) { - setIsPerfectMatch(true); - } - } - } - - const depth = commandPathParts.length; - - // Provide Suggestions based on the now-corrected context - - // Argument Completion - if ( - leafCommand?.completion && - (hasTrailingSpace || - (rawParts.length > depth && depth > 0 && partial !== '')) - ) { - const fetchAndSetSuggestions = async () => { - setIsLoadingSuggestions(true); - const argString = rawParts.slice(depth).join(' '); - const results = - (await leafCommand!.completion!(commandContext, argString)) || []; - const finalSuggestions = results.map((s) => ({ label: s, value: s })); - setSuggestions(finalSuggestions); - setShowSuggestions(finalSuggestions.length > 0); - setActiveSuggestionIndex(finalSuggestions.length > 0 ? 0 : -1); - setIsLoadingSuggestions(false); - }; - fetchAndSetSuggestions(); - return; - } - - // Command/Sub-command Completion - const commandsToSearch = currentLevel || []; - if (commandsToSearch.length > 0) { - let potentialSuggestions = commandsToSearch.filter( - (cmd) => - cmd.description && - (cmd.name.startsWith(partial) || - cmd.altNames?.some((alt) => alt.startsWith(partial))), - ); - - // If a user's input is an exact match and it is a leaf command, - // enter should submit immediately. 
- if (potentialSuggestions.length > 0 && !hasTrailingSpace) { - const perfectMatch = potentialSuggestions.find( - (s) => s.name === partial || s.altNames?.includes(partial), - ); - if (perfectMatch && perfectMatch.action) { - potentialSuggestions = []; - } - } - - const finalSuggestions = potentialSuggestions.map((cmd) => ({ - label: cmd.name, - value: cmd.name, - description: cmd.description, - })); - - setSuggestions(finalSuggestions); - setShowSuggestions(finalSuggestions.length > 0); - setActiveSuggestionIndex(finalSuggestions.length > 0 ? 0 : -1); - setIsLoadingSuggestions(false); - return; - } - - // If we fall through, no suggestions are available. - resetCompletionState(); - return; - } - - // Handle At Command Completion - const atIndex = buffer.text.lastIndexOf('@'); - if (atIndex === -1) { - resetCompletionState(); - return; - } - - const partialPath = buffer.text.substring(atIndex + 1); - const lastSlashIndex = partialPath.lastIndexOf('/'); - const baseDirRelative = - lastSlashIndex === -1 - ? '.' - : partialPath.substring(0, lastSlashIndex + 1); - const prefix = unescapePath( - lastSlashIndex === -1 - ? 
partialPath - : partialPath.substring(lastSlashIndex + 1), - ); - - const baseDirAbsolute = path.resolve(cwd, baseDirRelative); - - let isMounted = true; - - const findFilesRecursively = async ( - startDir: string, - searchPrefix: string, - fileDiscovery: FileDiscoveryService | null, - filterOptions: { - respectGitIgnore?: boolean; - respectGeminiIgnore?: boolean; - }, - currentRelativePath = '', - depth = 0, - maxDepth = 10, // Limit recursion depth - maxResults = 50, // Limit number of results - ): Promise => { - if (depth > maxDepth) { - return []; - } - - const lowerSearchPrefix = searchPrefix.toLowerCase(); - let foundSuggestions: Suggestion[] = []; - try { - const entries = await fs.readdir(startDir, { withFileTypes: true }); - for (const entry of entries) { - if (foundSuggestions.length >= maxResults) break; - - const entryPathRelative = path.join(currentRelativePath, entry.name); - const entryPathFromRoot = path.relative( - cwd, - path.join(startDir, entry.name), - ); - - // Conditionally ignore dotfiles - if (!searchPrefix.startsWith('.') && entry.name.startsWith('.')) { - continue; - } - - // Check if this entry should be ignored by filtering options - if ( - fileDiscovery && - fileDiscovery.shouldIgnoreFile(entryPathFromRoot, filterOptions) - ) { - continue; - } - - if (entry.name.toLowerCase().startsWith(lowerSearchPrefix)) { - foundSuggestions.push({ - label: entryPathRelative + (entry.isDirectory() ? '/' : ''), - value: escapePath( - entryPathRelative + (entry.isDirectory() ? 
'/' : ''), - ), - }); - } - if ( - entry.isDirectory() && - entry.name !== 'node_modules' && - !entry.name.startsWith('.') - ) { - if (foundSuggestions.length < maxResults) { - foundSuggestions = foundSuggestions.concat( - await findFilesRecursively( - path.join(startDir, entry.name), - searchPrefix, // Pass original searchPrefix for recursive calls - fileDiscovery, - filterOptions, - entryPathRelative, - depth + 1, - maxDepth, - maxResults - foundSuggestions.length, - ), - ); - } - } - } - } catch (_err) { - // Ignore errors like permission denied or ENOENT during recursive search - } - return foundSuggestions.slice(0, maxResults); - }; - - const findFilesWithGlob = async ( - searchPrefix: string, - fileDiscoveryService: FileDiscoveryService, - filterOptions: { - respectGitIgnore?: boolean; - respectGeminiIgnore?: boolean; - }, - maxResults = 50, - ): Promise => { - const globPattern = `**/${searchPrefix}*`; - const files = await glob(globPattern, { - cwd, - dot: searchPrefix.startsWith('.'), - nocase: true, - }); - - const suggestions: Suggestion[] = files - .map((file: string) => ({ - label: file, - value: escapePath(file), - })) - .filter((s) => { - if (fileDiscoveryService) { - return !fileDiscoveryService.shouldIgnoreFile( - s.label, - filterOptions, - ); // relative path - } - return true; - }) - .slice(0, maxResults); - - return suggestions; - }; - - const fetchSuggestions = async () => { - setIsLoadingSuggestions(true); - let fetchedSuggestions: Suggestion[] = []; - - const fileDiscoveryService = config ? config.getFileService() : null; - const enableRecursiveSearch = - config?.getEnableRecursiveFileSearch() ?? true; - const filterOptions = - config?.getFileFilteringOptions() ?? 
DEFAULT_FILE_FILTERING_OPTIONS; - - try { - // If there's no slash, or it's the root, do a recursive search from cwd - if ( - partialPath.indexOf('/') === -1 && - prefix && - enableRecursiveSearch - ) { - if (fileDiscoveryService) { - fetchedSuggestions = await findFilesWithGlob( - prefix, - fileDiscoveryService, - filterOptions, - ); - } else { - fetchedSuggestions = await findFilesRecursively( - cwd, - prefix, - null, - filterOptions, - ); - } - } else { - // Original behavior: list files in the specific directory - const lowerPrefix = prefix.toLowerCase(); - const entries = await fs.readdir(baseDirAbsolute, { - withFileTypes: true, - }); - - // Filter entries using git-aware filtering - const filteredEntries = []; - for (const entry of entries) { - // Conditionally ignore dotfiles - if (!prefix.startsWith('.') && entry.name.startsWith('.')) { - continue; - } - if (!entry.name.toLowerCase().startsWith(lowerPrefix)) continue; - - const relativePath = path.relative( - cwd, - path.join(baseDirAbsolute, entry.name), - ); - if ( - fileDiscoveryService && - fileDiscoveryService.shouldIgnoreFile(relativePath, filterOptions) - ) { - continue; - } - - filteredEntries.push(entry); - } - - fetchedSuggestions = filteredEntries.map((entry) => { - const label = entry.isDirectory() ? entry.name + '/' : entry.name; - return { - label, - value: escapePath(label), // Value for completion should be just the name part - }; - }); - } - - // Like glob, we always return forwardslashes, even in windows. 
- fetchedSuggestions = fetchedSuggestions.map((suggestion) => ({ - ...suggestion, - label: suggestion.label.replace(/\\/g, '/'), - value: suggestion.value.replace(/\\/g, '/'), - })); - - // Sort by depth, then directories first, then alphabetically - fetchedSuggestions.sort((a, b) => { - const depthA = (a.label.match(/\//g) || []).length; - const depthB = (b.label.match(/\//g) || []).length; - - if (depthA !== depthB) { - return depthA - depthB; - } - - const aIsDir = a.label.endsWith('/'); - const bIsDir = b.label.endsWith('/'); - if (aIsDir && !bIsDir) return -1; - if (!aIsDir && bIsDir) return 1; - - // exclude extension when comparing - const filenameA = a.label.substring( - 0, - a.label.length - path.extname(a.label).length, - ); - const filenameB = b.label.substring( - 0, - b.label.length - path.extname(b.label).length, - ); - - return ( - filenameA.localeCompare(filenameB) || a.label.localeCompare(b.label) - ); - }); - - if (isMounted) { - setSuggestions(fetchedSuggestions); - setShowSuggestions(fetchedSuggestions.length > 0); - setActiveSuggestionIndex(fetchedSuggestions.length > 0 ? 
0 : -1); - setVisibleStartIndex(0); - } - } catch (error: unknown) { - if (isNodeError(error) && error.code === 'ENOENT') { - if (isMounted) { - setSuggestions([]); - setShowSuggestions(false); - } - } else { - console.error( - `Error fetching completion suggestions for ${partialPath}: ${getErrorMessage(error)}`, - ); - if (isMounted) { - resetCompletionState(); - } - } - } - if (isMounted) { - setIsLoadingSuggestions(false); - } - }; - - const debounceTimeout = setTimeout(fetchSuggestions, 100); - - return () => { - isMounted = false; - clearTimeout(debounceTimeout); - }; - }, [ - buffer.text, - cwd, - isActive, - resetCompletionState, - slashCommands, - commandContext, - config, - ]); - - const handleAutocomplete = useCallback( - (indexToUse: number) => { - if (indexToUse < 0 || indexToUse >= suggestions.length) { - return; - } - const query = buffer.text; - const suggestion = suggestions[indexToUse].value; - - if (query.trimStart().startsWith('/')) { - const hasTrailingSpace = query.endsWith(' '); - const parts = query - .trimStart() - .substring(1) - .split(/\s+/) - .filter(Boolean); - - let isParentPath = false; - // If there's no trailing space, we need to check if the current query - // is already a complete path to a parent command. - if (!hasTrailingSpace) { - let currentLevel: readonly SlashCommand[] | undefined = slashCommands; - for (let i = 0; i < parts.length; i++) { - const part = parts[i]; - const found: SlashCommand | undefined = currentLevel?.find( - (cmd) => cmd.name === part || cmd.altNames?.includes(part), - ); - - if (found) { - if (i === parts.length - 1 && found.subCommands) { - isParentPath = true; - } - currentLevel = found.subCommands as - | readonly SlashCommand[] - | undefined; - } else { - // Path is invalid, so it can't be a parent path. - currentLevel = undefined; - break; - } - } - } - - // Determine the base path of the command. - // - If there's a trailing space, the whole command is the base. 
- // - If it's a known parent path, the whole command is the base. - // - If the last part is a complete argument, the whole command is the base. - // - Otherwise, the base is everything EXCEPT the last partial part. - const lastPart = parts.length > 0 ? parts[parts.length - 1] : ''; - const isLastPartACompleteArg = - lastPart.startsWith('--') && lastPart.includes('='); - - const basePath = - hasTrailingSpace || isParentPath || isLastPartACompleteArg - ? parts - : parts.slice(0, -1); - const newValue = `/${[...basePath, suggestion].join(' ')} `; - - buffer.setText(newValue); - } else { - const atIndex = query.lastIndexOf('@'); - if (atIndex === -1) return; - const pathPart = query.substring(atIndex + 1); - const lastSlashIndexInPath = pathPart.lastIndexOf('/'); - let autoCompleteStartIndex = atIndex + 1; - if (lastSlashIndexInPath !== -1) { - autoCompleteStartIndex += lastSlashIndexInPath + 1; - } - buffer.replaceRangeByOffset( - autoCompleteStartIndex, - buffer.text.length, - suggestion, - ); - } - resetCompletionState(); - }, - [resetCompletionState, buffer, suggestions, slashCommands], - ); - return { suggestions, activeSuggestionIndex, @@ -678,11 +113,16 @@ export function useCompletion( showSuggestions, isLoadingSuggestions, isPerfectMatch, - setActiveSuggestionIndex, + + setSuggestions, setShowSuggestions, + setActiveSuggestionIndex, + setVisibleStartIndex, + setIsLoadingSuggestions, + setIsPerfectMatch, + resetCompletionState, navigateUp, navigateDown, - handleAutocomplete, }; } diff --git a/packages/cli/src/ui/hooks/useGeminiStream.test.tsx b/packages/cli/src/ui/hooks/useGeminiStream.test.tsx index 89de9da2..5f89083a 100644 --- a/packages/cli/src/ui/hooks/useGeminiStream.test.tsx +++ b/packages/cli/src/ui/hooks/useGeminiStream.test.tsx @@ -30,7 +30,6 @@ import { SlashCommandProcessorResult, StreamingState, } from '../types.js'; -import { Dispatch, SetStateAction } from 'react'; import { LoadedSettings } from '../../config/settings.js'; // --- MOCKS --- @@ 
-257,7 +256,6 @@ describe('mergePartListUnions', () => { // --- Tests for useGeminiStream Hook --- describe('useGeminiStream', () => { let mockAddItem: Mock; - let mockSetShowHelp: Mock; let mockConfig: Config; let mockOnDebugMessage: Mock; let mockHandleSlashCommand: Mock; @@ -269,7 +267,6 @@ describe('useGeminiStream', () => { vi.clearAllMocks(); // Clear mocks before each test mockAddItem = vi.fn(); - mockSetShowHelp = vi.fn(); // Define the mock for getGeminiClient const mockGetGeminiClient = vi.fn().mockImplementation(() => { // MockedGeminiClientClass is defined in the module scope by the previous change. @@ -319,6 +316,7 @@ describe('useGeminiStream', () => { }, setQuotaErrorOccurred: vi.fn(), getQuotaErrorOccurred: vi.fn(() => false), + getModel: vi.fn(() => 'gemini-2.5-pro'), getContentGeneratorConfig: vi .fn() .mockReturnValue(contentGeneratorConfig), @@ -381,7 +379,6 @@ describe('useGeminiStream', () => { client: any; history: HistoryItem[]; addItem: UseHistoryManagerReturn['addItem']; - setShowHelp: Dispatch>; config: Config; onDebugMessage: (message: string) => void; handleSlashCommand: ( @@ -399,7 +396,6 @@ describe('useGeminiStream', () => { props.client, props.history, props.addItem, - props.setShowHelp, props.config, props.onDebugMessage, props.handleSlashCommand, @@ -416,7 +412,6 @@ describe('useGeminiStream', () => { client, history: [], addItem: mockAddItem as unknown as UseHistoryManagerReturn['addItem'], - setShowHelp: mockSetShowHelp, config: mockConfig, onDebugMessage: mockOnDebugMessage, handleSlashCommand: mockHandleSlashCommand as unknown as ( @@ -541,7 +536,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(mockConfig), [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -609,7 +603,6 @@ describe('useGeminiStream', () => { client, [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -706,7 +699,6 @@ describe('useGeminiStream', () => { 
client, [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -809,7 +801,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(mockConfig), [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -1160,7 +1151,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(mockConfig), [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -1212,7 +1202,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(testConfig), [], mockAddItem, - mockSetShowHelp, testConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -1261,7 +1250,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(mockConfig), [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -1308,7 +1296,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(mockConfig), [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -1356,7 +1343,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(mockConfig), [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -1444,7 +1430,6 @@ describe('useGeminiStream', () => { new MockedGeminiClientClass(mockConfig), [], mockAddItem, - mockSetShowHelp, mockConfig, mockOnDebugMessage, mockHandleSlashCommand, @@ -1473,4 +1458,195 @@ describe('useGeminiStream', () => { } }); }); + + describe('Thought Reset', () => { + it('should reset thought to null when starting a new prompt', async () => { + // First, simulate a response with a thought + mockSendMessageStream.mockReturnValue( + (async function* () { + yield { + type: ServerGeminiEventType.Thought, + value: { + subject: 'Previous thought', + description: 'Old description', + }, + }; + yield { + type: ServerGeminiEventType.Content, + value: 'Some response content', + }; + yield { type: 
ServerGeminiEventType.Finished, value: 'STOP' }; + })(), + ); + + const { result } = renderHook(() => + useGeminiStream( + new MockedGeminiClientClass(mockConfig), + [], + mockAddItem, + mockConfig, + mockOnDebugMessage, + mockHandleSlashCommand, + false, + () => 'vscode' as EditorType, + () => {}, + () => Promise.resolve(), + false, + () => {}, + ), + ); + + // Submit first query to set a thought + await act(async () => { + await result.current.submitQuery('First query'); + }); + + // Wait for the first response to complete + await waitFor(() => { + expect(mockAddItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: 'gemini', + text: 'Some response content', + }), + expect.any(Number), + ); + }); + + // Now simulate a new response without a thought + mockSendMessageStream.mockReturnValue( + (async function* () { + yield { + type: ServerGeminiEventType.Content, + value: 'New response content', + }; + yield { type: ServerGeminiEventType.Finished, value: 'STOP' }; + })(), + ); + + // Submit second query - thought should be reset + await act(async () => { + await result.current.submitQuery('Second query'); + }); + + // The thought should be reset to null when starting the new prompt + // We can verify this by checking that the LoadingIndicator would not show the previous thought + // The actual thought state is internal to the hook, but we can verify the behavior + // by ensuring the second response doesn't show the previous thought + await waitFor(() => { + expect(mockAddItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: 'gemini', + text: 'New response content', + }), + expect.any(Number), + ); + }); + }); + + it('should reset thought to null when user cancels', async () => { + // Mock a stream that yields a thought then gets cancelled + mockSendMessageStream.mockReturnValue( + (async function* () { + yield { + type: ServerGeminiEventType.Thought, + value: { subject: 'Some thought', description: 'Description' }, + }; + yield { type: 
ServerGeminiEventType.UserCancelled }; + })(), + ); + + const { result } = renderHook(() => + useGeminiStream( + new MockedGeminiClientClass(mockConfig), + [], + mockAddItem, + mockConfig, + mockOnDebugMessage, + mockHandleSlashCommand, + false, + () => 'vscode' as EditorType, + () => {}, + () => Promise.resolve(), + false, + () => {}, + ), + ); + + // Submit query + await act(async () => { + await result.current.submitQuery('Test query'); + }); + + // Verify cancellation message was added + await waitFor(() => { + expect(mockAddItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: 'info', + text: 'User cancelled the request.', + }), + expect.any(Number), + ); + }); + + // Verify state is reset to idle + expect(result.current.streamingState).toBe(StreamingState.Idle); + }); + + it('should reset thought to null when there is an error', async () => { + // Mock a stream that yields a thought then encounters an error + mockSendMessageStream.mockReturnValue( + (async function* () { + yield { + type: ServerGeminiEventType.Thought, + value: { subject: 'Some thought', description: 'Description' }, + }; + yield { + type: ServerGeminiEventType.Error, + value: { error: { message: 'Test error' } }, + }; + })(), + ); + + const { result } = renderHook(() => + useGeminiStream( + new MockedGeminiClientClass(mockConfig), + [], + mockAddItem, + mockConfig, + mockOnDebugMessage, + mockHandleSlashCommand, + false, + () => 'vscode' as EditorType, + () => {}, + () => Promise.resolve(), + false, + () => {}, + ), + ); + + // Submit query + await act(async () => { + await result.current.submitQuery('Test query'); + }); + + // Verify error message was added + await waitFor(() => { + expect(mockAddItem).toHaveBeenCalledWith( + expect.objectContaining({ + type: 'error', + }), + expect.any(Number), + ); + }); + + // Verify parseAndFormatApiError was called + expect(mockParseAndFormatApiError).toHaveBeenCalledWith( + { message: 'Test error' }, + expect.any(String), + undefined, + 
'gemini-2.5-pro', + 'gemini-2.5-flash', + ); + }); + }); }); diff --git a/packages/cli/src/ui/hooks/useGeminiStream.ts b/packages/cli/src/ui/hooks/useGeminiStream.ts index 7ff3515b..85614d3b 100644 --- a/packages/cli/src/ui/hooks/useGeminiStream.ts +++ b/packages/cli/src/ui/hooks/useGeminiStream.ts @@ -82,7 +82,6 @@ export const useGeminiStream = ( geminiClient: GeminiClient, history: HistoryItem[], addItem: UseHistoryManagerReturn['addItem'], - setShowHelp: React.Dispatch<React.SetStateAction<boolean>>, config: Config, onDebugMessage: (message: string) => void, handleSlashCommand: ( @@ -414,8 +413,9 @@ export const useGeminiStream = ( userMessageTimestamp, ); setIsResponding(false); + setThought(null); // Reset thought when user cancels }, - [addItem, pendingHistoryItemRef, setPendingHistoryItem], + [addItem, pendingHistoryItemRef, setPendingHistoryItem, setThought], ); const handleErrorEvent = useCallback( @@ -437,8 +437,9 @@ export const useGeminiStream = ( }, userMessageTimestamp, ); + setThought(null); // Reset thought when there's an error }, - [addItem, pendingHistoryItemRef, setPendingHistoryItem, config], + [addItem, pendingHistoryItemRef, setPendingHistoryItem, config, setThought], ); const handleFinishedEvent = useCallback( @@ -629,7 +630,6 @@ export const useGeminiStream = ( return; const userMessageTimestamp = Date.now(); - setShowHelp(false); // Reset quota error flag when starting a new query (not a continuation) if (!options?.isContinuation) { @@ -658,6 +658,7 @@ export const useGeminiStream = ( if (!options?.isContinuation) { startNewPrompt(); + setThought(null); // Reset thought when starting a new prompt } setIsResponding(true); @@ -711,7 +712,6 @@ export const useGeminiStream = ( }, [ streamingState, - setShowHelp, setModelSwitchedFromQuotaError, prepareQueryForGemini, processGeminiStreamEvents, diff --git a/packages/cli/src/ui/hooks/usePhraseCycler.ts b/packages/cli/src/ui/hooks/usePhraseCycler.ts index dc0993f0..83d68601 100644 --- 
a/packages/cli/src/ui/hooks/usePhraseCycler.ts +++ b/packages/cli/src/ui/hooks/usePhraseCycler.ts @@ -138,6 +138,7 @@ export const WITTY_LOADING_PHRASES = [ 'Enhancing... Enhancing... Still loading.', "It's not a bug, it's a feature... of this loading screen.", 'Have you tried turning it off and on again? (The loading screen, not me.)', + 'Constructing additional pylons...', ]; export const PHRASE_CHANGE_INTERVAL_MS = 15000; diff --git a/packages/cli/src/ui/hooks/useReactToolScheduler.ts b/packages/cli/src/ui/hooks/useReactToolScheduler.ts index cbe31d75..7fdcb590 100644 --- a/packages/cli/src/ui/hooks/useReactToolScheduler.ts +++ b/packages/cli/src/ui/hooks/useReactToolScheduler.ts @@ -138,7 +138,6 @@ export function useReactToolScheduler( outputUpdateHandler, onAllToolCallsComplete: allToolCallsCompleteHandler, onToolCallsUpdate: toolCallsUpdateHandler, - approvalMode: config.getApprovalMode(), getPreferredEditor, config, }), diff --git a/packages/cli/src/ui/hooks/useReverseSearchCompletion.test.tsx b/packages/cli/src/ui/hooks/useReverseSearchCompletion.test.tsx new file mode 100644 index 00000000..373696ce --- /dev/null +++ b/packages/cli/src/ui/hooks/useReverseSearchCompletion.test.tsx @@ -0,0 +1,260 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +/** @vitest-environment jsdom */ + +import { describe, it, expect } from 'vitest'; +import { renderHook, act } from '@testing-library/react'; +import { useReverseSearchCompletion } from './useReverseSearchCompletion.js'; +import { useTextBuffer } from '../components/shared/text-buffer.js'; + +describe('useReverseSearchCompletion', () => { + function useTextBufferForTest(text: string) { + return useTextBuffer({ + initialText: text, + initialCursorOffset: text.length, + viewport: { width: 80, height: 20 }, + isValidPath: () => false, + onChange: () => {}, + }); + } + + describe('Core Hook Behavior', () => { + describe('State Management', () => { + it('should initialize 
with default state', () => { + const mockShellHistory = ['echo hello']; + + const { result } = renderHook(() => + useReverseSearchCompletion( + useTextBufferForTest(''), + mockShellHistory, + false, + ), + ); + + expect(result.current.suggestions).toEqual([]); + expect(result.current.activeSuggestionIndex).toBe(-1); + expect(result.current.visibleStartIndex).toBe(0); + expect(result.current.showSuggestions).toBe(false); + expect(result.current.isLoadingSuggestions).toBe(false); + }); + + it('should reset state when reverseSearchActive becomes false', () => { + const mockShellHistory = ['echo hello']; + const { result, rerender } = renderHook( + ({ text, active }) => { + const textBuffer = useTextBufferForTest(text); + return useReverseSearchCompletion( + textBuffer, + mockShellHistory, + active, + ); + }, + { initialProps: { text: 'echo', active: true } }, + ); + + // Simulate reverseSearchActive becoming false + rerender({ text: 'echo', active: false }); + + expect(result.current.suggestions).toEqual([]); + expect(result.current.activeSuggestionIndex).toBe(-1); + expect(result.current.visibleStartIndex).toBe(0); + expect(result.current.showSuggestions).toBe(false); + }); + + describe('Navigation', () => { + it('should handle navigateUp with no suggestions', () => { + const mockShellHistory = ['echo hello']; + + const { result } = renderHook(() => + useReverseSearchCompletion( + useTextBufferForTest('grep'), + mockShellHistory, + true, + ), + ); + + act(() => { + result.current.navigateUp(); + }); + + expect(result.current.activeSuggestionIndex).toBe(-1); + }); + + it('should handle navigateDown with no suggestions', () => { + const mockShellHistory = ['echo hello']; + const { result } = renderHook(() => + useReverseSearchCompletion( + useTextBufferForTest('grep'), + mockShellHistory, + true, + ), + ); + + act(() => { + result.current.navigateDown(); + }); + + expect(result.current.activeSuggestionIndex).toBe(-1); + }); + + it('should navigate up through 
suggestions with wrap-around', () => { + const mockShellHistory = [ + 'ls -l', + 'ls -la', + 'cd /some/path', + 'git status', + 'echo "Hello, World!"', + 'echo Hi', + ]; + + const { result } = renderHook(() => + useReverseSearchCompletion( + useTextBufferForTest('echo'), + mockShellHistory, + true, + ), + ); + + expect(result.current.suggestions.length).toBe(2); + expect(result.current.activeSuggestionIndex).toBe(0); + + act(() => { + result.current.navigateUp(); + }); + + expect(result.current.activeSuggestionIndex).toBe(1); + }); + + it('should navigate down through suggestions with wrap-around', () => { + const mockShellHistory = [ + 'ls -l', + 'ls -la', + 'cd /some/path', + 'git status', + 'echo "Hello, World!"', + 'echo Hi', + ]; + const { result } = renderHook(() => + useReverseSearchCompletion( + useTextBufferForTest('ls'), + mockShellHistory, + true, + ), + ); + + expect(result.current.suggestions.length).toBe(2); + expect(result.current.activeSuggestionIndex).toBe(0); + + act(() => { + result.current.navigateDown(); + }); + + expect(result.current.activeSuggestionIndex).toBe(1); + }); + + it('should handle navigation with multiple suggestions', () => { + const mockShellHistory = [ + 'ls -l', + 'ls -la', + 'cd /some/path/l', + 'git status', + 'echo "Hello, World!"', + 'echo "Hi all"', + ]; + + const { result } = renderHook(() => + useReverseSearchCompletion( + useTextBufferForTest('l'), + mockShellHistory, + true, + ), + ); + + expect(result.current.suggestions.length).toBe(5); + expect(result.current.activeSuggestionIndex).toBe(0); + + act(() => { + result.current.navigateDown(); + }); + expect(result.current.activeSuggestionIndex).toBe(1); + + act(() => { + result.current.navigateDown(); + }); + expect(result.current.activeSuggestionIndex).toBe(2); + + act(() => { + result.current.navigateUp(); + }); + expect(result.current.activeSuggestionIndex).toBe(1); + + act(() => { + result.current.navigateUp(); + }); + 
expect(result.current.activeSuggestionIndex).toBe(0); + + act(() => { + result.current.navigateUp(); + }); + expect(result.current.activeSuggestionIndex).toBe(4); + }); + + it('should handle navigation with large suggestion lists and scrolling', () => { + const largeMockCommands = Array.from( + { length: 15 }, + (_, i) => `echo ${i}`, + ); + + const { result } = renderHook(() => + useReverseSearchCompletion( + useTextBufferForTest('echo'), + largeMockCommands, + true, + ), + ); + + expect(result.current.suggestions.length).toBe(15); + expect(result.current.activeSuggestionIndex).toBe(0); + expect(result.current.visibleStartIndex).toBe(0); + + act(() => { + result.current.navigateUp(); + }); + + expect(result.current.activeSuggestionIndex).toBe(14); + expect(result.current.visibleStartIndex).toBe(Math.max(0, 15 - 8)); + }); + }); + }); + + describe('Filtering', () => { + it('filters history by buffer.text and sets showSuggestions', () => { + const history = ['foo', 'barfoo', 'baz']; + const { result } = renderHook(() => + useReverseSearchCompletion(useTextBufferForTest('foo'), history, true), + ); + + // should only return the two entries containing "foo" + expect(result.current.suggestions.map((s) => s.value)).toEqual([ + 'foo', + 'barfoo', + ]); + expect(result.current.showSuggestions).toBe(true); + }); + + it('hides suggestions when there are no matches', () => { + const history = ['alpha', 'beta']; + const { result } = renderHook(() => + useReverseSearchCompletion(useTextBufferForTest('γ'), history, true), + ); + + expect(result.current.suggestions).toEqual([]); + expect(result.current.showSuggestions).toBe(false); + }); + }); + }); +}); diff --git a/packages/cli/src/ui/hooks/useReverseSearchCompletion.tsx b/packages/cli/src/ui/hooks/useReverseSearchCompletion.tsx new file mode 100644 index 00000000..1cc7e602 --- /dev/null +++ b/packages/cli/src/ui/hooks/useReverseSearchCompletion.tsx @@ -0,0 +1,91 @@ +/** + * @license + * Copyright 2025 Google LLC + * 
SPDX-License-Identifier: Apache-2.0 + */ + +import { useEffect, useCallback } from 'react'; +import { useCompletion } from './useCompletion.js'; +import { TextBuffer } from '../components/shared/text-buffer.js'; +import { Suggestion } from '../components/SuggestionsDisplay.js'; + +export interface UseReverseSearchCompletionReturn { + suggestions: Suggestion[]; + activeSuggestionIndex: number; + visibleStartIndex: number; + showSuggestions: boolean; + isLoadingSuggestions: boolean; + navigateUp: () => void; + navigateDown: () => void; + handleAutocomplete: (i: number) => void; + resetCompletionState: () => void; +} + +export function useReverseSearchCompletion( + buffer: TextBuffer, + shellHistory: readonly string[], + reverseSearchActive: boolean, +): UseReverseSearchCompletionReturn { + const { + suggestions, + activeSuggestionIndex, + visibleStartIndex, + showSuggestions, + isLoadingSuggestions, + + setSuggestions, + setShowSuggestions, + setActiveSuggestionIndex, + resetCompletionState, + navigateUp, + navigateDown, + } = useCompletion(); + + // whenever reverseSearchActive is on, filter history + useEffect(() => { + if (!reverseSearchActive) { + resetCompletionState(); + return; + } + const q = buffer.text.toLowerCase(); + const matches = shellHistory.reduce<Suggestion[]>((acc, cmd) => { + const idx = cmd.toLowerCase().indexOf(q); + if (idx !== -1) { + acc.push({ label: cmd, value: cmd, matchedIndex: idx }); + } + return acc; + }, []); + setSuggestions(matches); + setShowSuggestions(matches.length > 0); + setActiveSuggestionIndex(matches.length > 0 ? 
0 : -1); + }, [ + buffer.text, + shellHistory, + reverseSearchActive, + resetCompletionState, + setActiveSuggestionIndex, + setShowSuggestions, + setSuggestions, + ]); + + const handleAutocomplete = useCallback( + (i: number) => { + if (i < 0 || i >= suggestions.length) return; + buffer.setText(suggestions[i].value); + resetCompletionState(); + }, + [buffer, suggestions, resetCompletionState], + ); + + return { + suggestions, + activeSuggestionIndex, + visibleStartIndex, + showSuggestions, + isLoadingSuggestions, + navigateUp, + navigateDown, + handleAutocomplete, + resetCompletionState, + }; +} diff --git a/packages/cli/src/ui/hooks/useShellHistory.ts b/packages/cli/src/ui/hooks/useShellHistory.ts index 5701de57..47062abe 100644 --- a/packages/cli/src/ui/hooks/useShellHistory.ts +++ b/packages/cli/src/ui/hooks/useShellHistory.ts @@ -13,6 +13,7 @@ const HISTORY_FILE = 'shell_history'; const MAX_HISTORY_LENGTH = 100; export interface UseShellHistoryReturn { + history: string[]; addCommandToHistory: (command: string) => void; getPreviousCommand: () => string | null; getNextCommand: () => string | null; @@ -24,15 +25,32 @@ async function getHistoryFilePath(projectRoot: string): Promise<string> { return path.join(historyDir, HISTORY_FILE); } +// Handle multiline commands async function readHistoryFile(filePath: string): Promise<string[]> { try { - const content = await fs.readFile(filePath, 'utf-8'); - return content.split('\n').filter(Boolean); - } catch (error) { - if (isNodeError(error) && error.code === 'ENOENT') { - return []; + const text = await fs.readFile(filePath, 'utf-8'); + const result: string[] = []; + let cur = ''; + + for (const raw of text.split(/\r?\n/)) { + if (!raw.trim()) continue; + const line = raw; + + const m = cur.match(/(\\+)$/); + if (m && m[1].length % 2) { + // odd number of trailing '\' + cur = cur.slice(0, -1) + ' ' + line; + } else { + if (cur) result.push(cur); + cur = line; + } } - console.error('Error reading shell history:', error); + + if (cur) 
result.push(cur); + return result; + } catch (err) { + if (isNodeError(err) && err.code === 'ENOENT') return []; + console.error('Error reading history:', err); return []; } } @@ -101,10 +119,15 @@ export function useShellHistory(projectRoot: string): UseShellHistoryReturn { return history[newIndex] ?? null; }, [history, historyIndex]); + const resetHistoryPosition = useCallback(() => { + setHistoryIndex(-1); + }, []); + return { + history, addCommandToHistory, getPreviousCommand, getNextCommand, - resetHistoryPosition: () => setHistoryIndex(-1), + resetHistoryPosition, }; } diff --git a/packages/cli/src/ui/types.ts b/packages/cli/src/ui/types.ts index c67eaa02..75462bca 100644 --- a/packages/cli/src/ui/types.ts +++ b/packages/cli/src/ui/types.ts @@ -97,6 +97,11 @@ export type HistoryItemAbout = HistoryItemBase & { gcpProject: string; }; +export type HistoryItemHelp = HistoryItemBase & { + type: 'help'; + timestamp: Date; +}; + export type HistoryItemStats = HistoryItemBase & { type: 'stats'; duration: string; @@ -142,6 +147,7 @@ export type HistoryItemWithoutId = | HistoryItemInfo | HistoryItemError | HistoryItemAbout + | HistoryItemHelp | HistoryItemToolGroup | HistoryItemStats | HistoryItemModelStats @@ -157,6 +163,7 @@ export enum MessageType { ERROR = 'error', USER = 'user', ABOUT = 'about', + HELP = 'help', STATS = 'stats', MODEL_STATS = 'model_stats', TOOL_STATS = 'tool_stats', @@ -183,6 +190,11 @@ export type Message = gcpProject: string; content?: string; // Optional content, not really used for ABOUT } + | { + type: MessageType.HELP; + timestamp: Date; + content?: string; // Optional content, not really used for HELP + } | { type: MessageType.STATS; timestamp: Date; diff --git a/packages/cli/src/ui/utils/updateCheck.test.ts b/packages/cli/src/ui/utils/updateCheck.test.ts index 975c320d..c2b56a03 100644 --- a/packages/cli/src/ui/utils/updateCheck.test.ts +++ b/packages/cli/src/ui/utils/updateCheck.test.ts @@ -19,11 +19,17 @@ vi.mock('update-notifier', () 
=> ({ describe('checkForUpdates', () => { beforeEach(() => { + vi.useFakeTimers(); vi.resetAllMocks(); // Clear DEV environment variable before each test delete process.env.DEV; }); + afterEach(() => { + vi.useRealTimers(); + vi.restoreAllMocks(); + }); + it('should return null when running from source (DEV=true)', async () => { process.env.DEV = 'true'; getPackageJson.mockResolvedValue({ @@ -31,7 +37,9 @@ describe('checkForUpdates', () => { version: '1.0.0', }); updateNotifier.mockReturnValue({ - update: { current: '1.0.0', latest: '1.1.0' }, + fetchInfo: vi + .fn() + .mockResolvedValue({ current: '1.0.0', latest: '1.1.0' }), }); const result = await checkForUpdates(); expect(result).toBeNull(); @@ -50,7 +58,9 @@ describe('checkForUpdates', () => { name: 'test-package', version: '1.0.0', }); - updateNotifier.mockReturnValue({ update: null }); + updateNotifier.mockReturnValue({ + fetchInfo: vi.fn().mockResolvedValue(null), + }); const result = await checkForUpdates(); expect(result).toBeNull(); }); @@ -61,10 +71,14 @@ describe('checkForUpdates', () => { version: '1.0.0', }); updateNotifier.mockReturnValue({ - update: { current: '1.0.0', latest: '1.1.0' }, + fetchInfo: vi + .fn() + .mockResolvedValue({ current: '1.0.0', latest: '1.1.0' }), }); + const result = await checkForUpdates(); - expect(result).toContain('1.0.0 → 1.1.0'); + expect(result?.message).toContain('1.0.0 → 1.1.0'); + expect(result?.update).toEqual({ current: '1.0.0', latest: '1.1.0' }); }); it('should return null if the latest version is the same as the current version', async () => { @@ -73,7 +87,9 @@ describe('checkForUpdates', () => { version: '1.0.0', }); updateNotifier.mockReturnValue({ - update: { current: '1.0.0', latest: '1.0.0' }, + fetchInfo: vi + .fn() + .mockResolvedValue({ current: '1.0.0', latest: '1.0.0' }), }); const result = await checkForUpdates(); expect(result).toBeNull(); @@ -85,15 +101,63 @@ describe('checkForUpdates', () => { version: '1.1.0', }); 
updateNotifier.mockReturnValue({ - update: { current: '1.1.0', latest: '1.0.0' }, + fetchInfo: vi + .fn() + .mockResolvedValue({ current: '1.1.0', latest: '1.0.0' }), }); const result = await checkForUpdates(); expect(result).toBeNull(); }); + it('should return null if fetchInfo rejects', async () => { + getPackageJson.mockResolvedValue({ + name: 'test-package', + version: '1.0.0', + }); + updateNotifier.mockReturnValue({ + fetchInfo: vi.fn().mockRejectedValue(new Error('Timeout')), + }); + + const result = await checkForUpdates(); + expect(result).toBeNull(); + }); + it('should handle errors gracefully', async () => { getPackageJson.mockRejectedValue(new Error('test error')); const result = await checkForUpdates(); expect(result).toBeNull(); }); + + describe('nightly updates', () => { + it('should notify for a newer nightly version when current is nightly', async () => { + getPackageJson.mockResolvedValue({ + name: 'test-package', + version: '1.2.3-nightly.1', + }); + + const fetchInfoMock = vi.fn().mockImplementation(({ distTag }) => { + if (distTag === 'nightly') { + return Promise.resolve({ + latest: '1.2.3-nightly.2', + current: '1.2.3-nightly.1', + }); + } + if (distTag === 'latest') { + return Promise.resolve({ + latest: '1.2.3', + current: '1.2.3-nightly.1', + }); + } + return Promise.resolve(null); + }); + + updateNotifier.mockImplementation(({ pkg, distTag }) => ({ + fetchInfo: () => fetchInfoMock({ pkg, distTag }), + })); + + const result = await checkForUpdates(); + expect(result?.message).toContain('1.2.3-nightly.1 → 1.2.3-nightly.2'); + expect(result?.update.latest).toBe('1.2.3-nightly.2'); + }); + }); }); diff --git a/packages/cli/src/ui/utils/updateCheck.ts b/packages/cli/src/ui/utils/updateCheck.ts index f02c95ca..2ee84520 100644 --- a/packages/cli/src/ui/utils/updateCheck.ts +++ b/packages/cli/src/ui/utils/updateCheck.ts @@ -4,37 +4,92 @@ * SPDX-License-Identifier: Apache-2.0 */ -import updateNotifier from 'update-notifier'; +import 
updateNotifier, { UpdateInfo } from 'update-notifier'; import semver from 'semver'; import { getPackageJson } from '../../utils/package.js'; -export async function checkForUpdates(): Promise<string | null> { +export const FETCH_TIMEOUT_MS = 2000; + +export interface UpdateObject { + message: string; + update: UpdateInfo; +} + +/** + * From a nightly and stable update, determines which is the "best" one to offer. + * The rule is to always prefer nightly if the base versions are the same. + */ +function getBestAvailableUpdate( + nightly?: UpdateInfo, + stable?: UpdateInfo, +): UpdateInfo | null { + if (!nightly) return stable || null; + if (!stable) return nightly || null; + + const nightlyVer = nightly.latest; + const stableVer = stable.latest; + + if ( + semver.coerce(stableVer)?.version === semver.coerce(nightlyVer)?.version + ) { + return nightly; + } + + return semver.gt(stableVer, nightlyVer) ? stable : nightly; +} + +export async function checkForUpdates(): Promise<UpdateObject | null> { try { // Skip update check when running from source (development mode) if (process.env.DEV === 'true') { return null; } - const packageJson = await getPackageJson(); if (!packageJson || !packageJson.name || !packageJson.version) { return null; } - const notifier = updateNotifier({ - pkg: { - name: packageJson.name, - version: packageJson.version, - }, - // check every time - updateCheckInterval: 0, - // allow notifier to run in scripts - shouldNotifyInNpmScript: true, - }); - if ( - notifier.update && - semver.gt(notifier.update.latest, notifier.update.current) - ) { - return `Qwen Code update available! 
${notifier.update.current} → ${notifier.update.latest}\nRun npm install -g ${packageJson.name} to update`; + const { name, version: currentVersion } = packageJson; + const isNightly = currentVersion.includes('nightly'); + const createNotifier = (distTag: 'latest' | 'nightly') => + updateNotifier({ + pkg: { + name, + version: currentVersion, + }, + updateCheckInterval: 0, + shouldNotifyInNpmScript: true, + distTag, + }); + + if (isNightly) { + const [nightlyUpdateInfo, latestUpdateInfo] = await Promise.all([ + createNotifier('nightly').fetchInfo(), + createNotifier('latest').fetchInfo(), + ]); + + const bestUpdate = getBestAvailableUpdate( + nightlyUpdateInfo, + latestUpdateInfo, + ); + + if (bestUpdate && semver.gt(bestUpdate.latest, currentVersion)) { + const message = `A new version of Qwen Code is available! ${currentVersion} → ${bestUpdate.latest}`; + return { + message, + update: { ...bestUpdate, current: currentVersion }, + }; + } + } else { + const updateInfo = await createNotifier('latest').fetchInfo(); + + if (updateInfo && semver.gt(updateInfo.latest, currentVersion)) { + const message = `Qwen Code update available! ${currentVersion} → ${updateInfo.latest}`; + return { + message, + update: { ...updateInfo, current: currentVersion }, + }; + } } return null; diff --git a/packages/cli/src/utils/gitUtils.ts b/packages/cli/src/utils/gitUtils.ts new file mode 100644 index 00000000..d510008c --- /dev/null +++ b/packages/cli/src/utils/gitUtils.ts @@ -0,0 +1,26 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { execSync } from 'child_process'; + +/** + * Checks if a directory is within a git repository hosted on GitHub. 
+ * @returns true if the directory is in a git repository with a github.com remote, false otherwise + */ +export function isGitHubRepository(): boolean { + try { + const remotes = execSync('git remote -v', { + encoding: 'utf-8', + }); + + const pattern = /github\.com/; + + return pattern.test(remotes); + } catch (_error) { + // If any filesystem error occurs, assume not a git repo + return false; + } +} diff --git a/packages/cli/src/utils/handleAutoUpdate.test.ts b/packages/cli/src/utils/handleAutoUpdate.test.ts new file mode 100644 index 00000000..eb54fced --- /dev/null +++ b/packages/cli/src/utils/handleAutoUpdate.test.ts @@ -0,0 +1,272 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, vi, beforeEach, afterEach, Mock } from 'vitest'; +import { getInstallationInfo, PackageManager } from './installationInfo.js'; +import { updateEventEmitter } from './updateEventEmitter.js'; +import { UpdateObject } from '../ui/utils/updateCheck.js'; +import { LoadedSettings } from '../config/settings.js'; +import EventEmitter from 'node:events'; +import { handleAutoUpdate } from './handleAutoUpdate.js'; + +vi.mock('./installationInfo.js', async () => { + const actual = await vi.importActual('./installationInfo.js'); + return { + ...actual, + getInstallationInfo: vi.fn(), + }; +}); + +vi.mock('./updateEventEmitter.js', async () => { + const actual = await vi.importActual('./updateEventEmitter.js'); + return { + ...actual, + updateEventEmitter: { + ...actual.updateEventEmitter, + emit: vi.fn(), + }, + }; +}); + +interface MockChildProcess extends EventEmitter { + stdin: EventEmitter & { + write: Mock; + end: Mock; + }; + stderr: EventEmitter; +} + +const mockGetInstallationInfo = vi.mocked(getInstallationInfo); +const mockUpdateEventEmitter = vi.mocked(updateEventEmitter); + +describe('handleAutoUpdate', () => { + let mockSpawn: Mock; + let mockUpdateInfo: UpdateObject; + let mockSettings: LoadedSettings; 
+ let mockChildProcess: MockChildProcess; + + beforeEach(() => { + mockSpawn = vi.fn(); + vi.clearAllMocks(); + mockUpdateInfo = { + update: { + latest: '2.0.0', + current: '1.0.0', + type: 'major', + name: '@qwen-code/qwen-code', + }, + message: 'An update is available!', + }; + + mockSettings = { + merged: { + disableAutoUpdate: false, + }, + } as LoadedSettings; + + mockChildProcess = Object.assign(new EventEmitter(), { + stdin: Object.assign(new EventEmitter(), { + write: vi.fn(), + end: vi.fn(), + }), + stderr: new EventEmitter(), + }) as MockChildProcess; + + mockSpawn.mockReturnValue( + mockChildProcess as unknown as ReturnType, + ); + }); + + afterEach(() => { + vi.clearAllMocks(); + }); + + it('should do nothing if update info is null', () => { + handleAutoUpdate(null, mockSettings, '/root', mockSpawn); + expect(mockGetInstallationInfo).not.toHaveBeenCalled(); + expect(mockUpdateEventEmitter.emit).not.toHaveBeenCalled(); + expect(mockSpawn).not.toHaveBeenCalled(); + }); + + it('should do nothing if update nag is disabled', () => { + mockSettings.merged.disableUpdateNag = true; + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + expect(mockGetInstallationInfo).not.toHaveBeenCalled(); + expect(mockUpdateEventEmitter.emit).not.toHaveBeenCalled(); + expect(mockSpawn).not.toHaveBeenCalled(); + }); + + it('should emit "update-received" but not update if auto-updates are disabled', () => { + mockSettings.merged.disableAutoUpdate = true; + mockGetInstallationInfo.mockReturnValue({ + updateCommand: 'npm i -g @qwen-code/qwen-code@latest', + updateMessage: 'Please update manually.', + isGlobal: true, + packageManager: PackageManager.NPM, + }); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledTimes(1); + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledWith( + 'update-received', + { + message: 'An update is available!\nPlease update manually.', + }, + ); + 
expect(mockSpawn).not.toHaveBeenCalled(); + }); + + it('should emit "update-received" but not update if no update command is found', () => { + mockGetInstallationInfo.mockReturnValue({ + updateCommand: undefined, + updateMessage: 'Cannot determine update command.', + isGlobal: false, + packageManager: PackageManager.NPM, + }); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledTimes(1); + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledWith( + 'update-received', + { + message: 'An update is available!\nCannot determine update command.', + }, + ); + expect(mockSpawn).not.toHaveBeenCalled(); + }); + + it('should combine update messages correctly', () => { + mockGetInstallationInfo.mockReturnValue({ + updateCommand: undefined, // No command to prevent spawn + updateMessage: 'This is an additional message.', + isGlobal: false, + packageManager: PackageManager.NPM, + }); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledTimes(1); + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledWith( + 'update-received', + { + message: 'An update is available!\nThis is an additional message.', + }, + ); + }); + + it('should attempt to perform an update when conditions are met', async () => { + mockGetInstallationInfo.mockReturnValue({ + updateCommand: 'npm i -g @qwen-code/qwen-code@latest', + updateMessage: 'This is an additional message.', + isGlobal: false, + packageManager: PackageManager.NPM, + }); + + // Simulate successful execution + setTimeout(() => { + mockChildProcess.emit('close', 0); + }, 0); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + + expect(mockSpawn).toHaveBeenCalledOnce(); + }); + + it('should emit "update-failed" when the update process fails', async () => { + await new Promise<void>((resolve) => { + mockGetInstallationInfo.mockReturnValue({ + updateCommand: 'npm i -g @qwen-code/qwen-code@latest', + updateMessage: 'This is an additional message.', + isGlobal: false, + packageManager: PackageManager.NPM, + }); + + // Simulate failed execution + setTimeout(() => { + mockChildProcess.stderr.emit('data', 'An error occurred'); + mockChildProcess.emit('close', 1); + resolve(); + }, 0); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + }); + + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledWith('update-failed', { + message: + 'Automatic update failed. Please try updating manually. (command: npm i -g @qwen-code/qwen-code@2.0.0, stderr: An error occurred)', + }); + }); + + it('should emit "update-failed" when the spawn function throws an error', async () => { + await new Promise<void>((resolve) => { + mockGetInstallationInfo.mockReturnValue({ + updateCommand: 'npm i -g @qwen-code/qwen-code@latest', + updateMessage: 'This is an additional message.', + isGlobal: false, + packageManager: PackageManager.NPM, + }); + + // Simulate an error event + setTimeout(() => { + mockChildProcess.emit('error', new Error('Spawn error')); + resolve(); + }, 0); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + }); + + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledWith('update-failed', { + message: + 'Automatic update failed. Please try updating manually. (error: Spawn error)', + }); + }); + + it('should use the "@nightly" tag for nightly updates', async () => { + mockUpdateInfo.update.latest = '2.0.0-nightly'; + mockGetInstallationInfo.mockReturnValue({ + updateCommand: 'npm i -g @qwen-code/qwen-code@latest', + updateMessage: 'This is an additional message.', + isGlobal: false, + packageManager: PackageManager.NPM, + }); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + + expect(mockSpawn).toHaveBeenCalledWith( + 'npm i -g @qwen-code/qwen-code@nightly', + { + shell: true, + stdio: 'pipe', + }, + ); + }); + + it('should emit "update-success" when the update process succeeds', async () => { + await new Promise<void>((resolve) => { + mockGetInstallationInfo.mockReturnValue({ + updateCommand: 'npm i -g @qwen-code/qwen-code@latest', + updateMessage: 'This is an additional message.', + isGlobal: false, + packageManager: PackageManager.NPM, + }); + + // Simulate successful execution + setTimeout(() => { + mockChildProcess.emit('close', 0); + resolve(); + }, 0); + + handleAutoUpdate(mockUpdateInfo, mockSettings, '/root', mockSpawn); + }); + + expect(mockUpdateEventEmitter.emit).toHaveBeenCalledWith('update-success', { + message: + 'Update successful! 
The new version will be used on your next run.', + }); + }); +}); diff --git a/packages/cli/src/utils/handleAutoUpdate.ts b/packages/cli/src/utils/handleAutoUpdate.ts new file mode 100644 index 00000000..cbcdb2e0 --- /dev/null +++ b/packages/cli/src/utils/handleAutoUpdate.ts @@ -0,0 +1,145 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { UpdateObject } from '../ui/utils/updateCheck.js'; +import { LoadedSettings } from '../config/settings.js'; +import { getInstallationInfo } from './installationInfo.js'; +import { updateEventEmitter } from './updateEventEmitter.js'; +import { HistoryItem, MessageType } from '../ui/types.js'; +import { spawnWrapper } from './spawnWrapper.js'; +import { spawn } from 'child_process'; + +export function handleAutoUpdate( + info: UpdateObject | null, + settings: LoadedSettings, + projectRoot: string, + spawnFn: typeof spawn = spawnWrapper, +) { + if (!info) { + return; + } + + if (settings.merged.disableUpdateNag) { + return; + } + + const installationInfo = getInstallationInfo( + projectRoot, + settings.merged.disableAutoUpdate ?? false, + ); + + let combinedMessage = info.message; + if (installationInfo.updateMessage) { + combinedMessage += `\n${installationInfo.updateMessage}`; + } + + updateEventEmitter.emit('update-received', { + message: combinedMessage, + }); + + if (!installationInfo.updateCommand || settings.merged.disableAutoUpdate) { + return; + } + const isNightly = info.update.latest.includes('nightly'); + + const updateCommand = installationInfo.updateCommand.replace( + '@latest', + isNightly ? '@nightly' : `@${info.update.latest}`, + ); + const updateProcess = spawnFn(updateCommand, { stdio: 'pipe', shell: true }); + let errorOutput = ''; + updateProcess.stderr.on('data', (data) => { + errorOutput += data.toString(); + }); + + updateProcess.on('close', (code) => { + if (code === 0) { + updateEventEmitter.emit('update-success', { + message: + 'Update successful! 
The new version will be used on your next run.', + }); + } else { + updateEventEmitter.emit('update-failed', { + message: `Automatic update failed. Please try updating manually. (command: ${updateCommand}, stderr: ${errorOutput.trim()})`, + }); + } + }); + + updateProcess.on('error', (err) => { + updateEventEmitter.emit('update-failed', { + message: `Automatic update failed. Please try updating manually. (error: ${err.message})`, + }); + }); + return updateProcess; +} + +export function setUpdateHandler( + addItem: (item: Omit<HistoryItem, 'id'>, timestamp: number) => void, + setUpdateInfo: (info: UpdateObject | null) => void, +) { + let successfullyInstalled = false; + const handleUpdateRecieved = (info: UpdateObject) => { + setUpdateInfo(info); + const savedMessage = info.message; + setTimeout(() => { + if (!successfullyInstalled) { + addItem( + { + type: MessageType.INFO, + text: savedMessage, + }, + Date.now(), + ); + } + setUpdateInfo(null); + }, 60000); + }; + + const handleUpdateFailed = () => { + setUpdateInfo(null); + addItem( + { + type: MessageType.ERROR, + text: `Automatic update failed. Please try updating manually`, + }, + Date.now(), + ); + }; + + const handleUpdateSuccess = () => { + successfullyInstalled = true; + setUpdateInfo(null); + addItem( + { + type: MessageType.INFO, + text: `Update successful! 
The new version will be used on your next run.`, + }, + Date.now(), + ); + }; + + const handleUpdateInfo = (data: { message: string }) => { + addItem( + { + type: MessageType.INFO, + text: data.message, + }, + Date.now(), + ); + }; + + updateEventEmitter.on('update-received', handleUpdateRecieved); + updateEventEmitter.on('update-failed', handleUpdateFailed); + updateEventEmitter.on('update-success', handleUpdateSuccess); + updateEventEmitter.on('update-info', handleUpdateInfo); + + return () => { + updateEventEmitter.off('update-received', handleUpdateRecieved); + updateEventEmitter.off('update-failed', handleUpdateFailed); + updateEventEmitter.off('update-success', handleUpdateSuccess); + updateEventEmitter.off('update-info', handleUpdateInfo); + }; +} diff --git a/packages/cli/src/utils/installationInfo.test.ts b/packages/cli/src/utils/installationInfo.test.ts new file mode 100644 index 00000000..39cae322 --- /dev/null +++ b/packages/cli/src/utils/installationInfo.test.ts @@ -0,0 +1,315 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { vi, describe, it, expect, beforeEach, afterEach } from 'vitest'; +import { getInstallationInfo, PackageManager } from './installationInfo.js'; +import * as fs from 'fs'; +import * as path from 'path'; +import * as childProcess from 'child_process'; +import { isGitRepository } from '@qwen-code/qwen-code-core'; + +vi.mock('@qwen-code/qwen-code-core', () => ({ + isGitRepository: vi.fn(), +})); + +vi.mock('fs', async (importOriginal) => { + const actualFs = await importOriginal(); + return { + ...actualFs, + realpathSync: vi.fn(), + existsSync: vi.fn(), + }; +}); + +vi.mock('child_process', async (importOriginal) => { + const actual = await importOriginal(); + return { + ...actual, + execSync: vi.fn(), + }; +}); + +const mockedIsGitRepository = vi.mocked(isGitRepository); +const mockedRealPathSync = vi.mocked(fs.realpathSync); +const mockedExistsSync = vi.mocked(fs.existsSync); 
+const mockedExecSync = vi.mocked(childProcess.execSync); + +describe('getInstallationInfo', () => { + const projectRoot = '/path/to/project'; + let originalArgv: string[]; + + beforeEach(() => { + vi.resetAllMocks(); + originalArgv = [...process.argv]; + // Mock process.cwd() for isGitRepository + vi.spyOn(process, 'cwd').mockReturnValue(projectRoot); + }); + + afterEach(() => { + process.argv = originalArgv; + }); + + it('should return UNKNOWN when cliPath is not available', () => { + process.argv[1] = ''; + const info = getInstallationInfo(projectRoot, false); + expect(info.packageManager).toBe(PackageManager.UNKNOWN); + }); + + it('should return UNKNOWN and log error if realpathSync fails', () => { + const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {}); + process.argv[1] = '/path/to/cli'; + const error = new Error('realpath failed'); + mockedRealPathSync.mockImplementation(() => { + throw error; + }); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.UNKNOWN); + expect(consoleSpy).toHaveBeenCalledWith(error); + consoleSpy.mockRestore(); + }); + + it('should detect running from a local git clone', () => { + process.argv[1] = `${projectRoot}/packages/cli/dist/index.js`; + mockedRealPathSync.mockReturnValue( + `${projectRoot}/packages/cli/dist/index.js`, + ); + mockedIsGitRepository.mockReturnValue(true); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.UNKNOWN); + expect(info.isGlobal).toBe(false); + expect(info.updateMessage).toBe( + 'Running from a local git clone. 
Please update with "git pull".', + ); + }); + + it('should detect running via npx', () => { + const npxPath = `/Users/test/.npm/_npx/12345/bin/gemini`; + process.argv[1] = npxPath; + mockedRealPathSync.mockReturnValue(npxPath); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.NPX); + expect(info.isGlobal).toBe(false); + expect(info.updateMessage).toBe('Running via npx, update not applicable.'); + }); + + it('should detect running via pnpx', () => { + const pnpxPath = `/Users/test/.pnpm/_pnpx/12345/bin/gemini`; + process.argv[1] = pnpxPath; + mockedRealPathSync.mockReturnValue(pnpxPath); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.PNPX); + expect(info.isGlobal).toBe(false); + expect(info.updateMessage).toBe('Running via pnpx, update not applicable.'); + }); + + it('should detect running via bunx', () => { + const bunxPath = `/Users/test/.bun/install/cache/12345/bin/gemini`; + process.argv[1] = bunxPath; + mockedRealPathSync.mockReturnValue(bunxPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.BUNX); + expect(info.isGlobal).toBe(false); + expect(info.updateMessage).toBe('Running via bunx, update not applicable.'); + }); + + it('should detect Homebrew installation via execSync', () => { + Object.defineProperty(process, 'platform', { + value: 'darwin', + }); + const cliPath = '/usr/local/bin/gemini'; + process.argv[1] = cliPath; + mockedRealPathSync.mockReturnValue(cliPath); + mockedExecSync.mockReturnValue(Buffer.from('gemini-cli')); // Simulate successful command + + const info = getInstallationInfo(projectRoot, false); + + expect(mockedExecSync).toHaveBeenCalledWith( + 'brew list -1 | grep -q "^gemini-cli$"', + { stdio: 'ignore' }, + ); + 
expect(info.packageManager).toBe(PackageManager.HOMEBREW); + expect(info.isGlobal).toBe(true); + expect(info.updateMessage).toContain('brew upgrade'); + }); + + it('should fall through if brew command fails', () => { + Object.defineProperty(process, 'platform', { + value: 'darwin', + }); + const cliPath = '/usr/local/bin/gemini'; + process.argv[1] = cliPath; + mockedRealPathSync.mockReturnValue(cliPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + + const info = getInstallationInfo(projectRoot, false); + + expect(mockedExecSync).toHaveBeenCalledWith( + 'brew list -1 | grep -q "^gemini-cli$"', + { stdio: 'ignore' }, + ); + // Should fall back to default global npm + expect(info.packageManager).toBe(PackageManager.NPM); + expect(info.isGlobal).toBe(true); + }); + + it('should detect global pnpm installation', () => { + const pnpmPath = `/Users/test/.pnpm/global/5/node_modules/.pnpm/some-hash/node_modules/@qwen-code/qwen-code/dist/index.js`; + process.argv[1] = pnpmPath; + mockedRealPathSync.mockReturnValue(pnpmPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + + const info = getInstallationInfo(projectRoot, false); + expect(info.packageManager).toBe(PackageManager.PNPM); + expect(info.isGlobal).toBe(true); + expect(info.updateCommand).toBe('pnpm add -g @qwen-code/qwen-code@latest'); + expect(info.updateMessage).toContain('Attempting to automatically update'); + + const infoDisabled = getInstallationInfo(projectRoot, true); + expect(infoDisabled.updateMessage).toContain('Please run pnpm add'); + }); + + it('should detect global yarn installation', () => { + const yarnPath = `/Users/test/.yarn/global/node_modules/@qwen-code/qwen-code/dist/index.js`; + process.argv[1] = yarnPath; + mockedRealPathSync.mockReturnValue(yarnPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + + const info = getInstallationInfo(projectRoot, false); + 
expect(info.packageManager).toBe(PackageManager.YARN); + expect(info.isGlobal).toBe(true); + expect(info.updateCommand).toBe( + 'yarn global add @qwen-code/qwen-code@latest', + ); + expect(info.updateMessage).toContain('Attempting to automatically update'); + + const infoDisabled = getInstallationInfo(projectRoot, true); + expect(infoDisabled.updateMessage).toContain('Please run yarn global add'); + }); + + it('should detect global bun installation', () => { + const bunPath = `/Users/test/.bun/bin/gemini`; + process.argv[1] = bunPath; + mockedRealPathSync.mockReturnValue(bunPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + + const info = getInstallationInfo(projectRoot, false); + expect(info.packageManager).toBe(PackageManager.BUN); + expect(info.isGlobal).toBe(true); + expect(info.updateCommand).toBe('bun add -g @qwen-code/qwen-code@latest'); + expect(info.updateMessage).toContain('Attempting to automatically update'); + + const infoDisabled = getInstallationInfo(projectRoot, true); + expect(infoDisabled.updateMessage).toContain('Please run bun add'); + }); + + it('should detect local installation and identify yarn from lockfile', () => { + const localPath = `${projectRoot}/node_modules/.bin/gemini`; + process.argv[1] = localPath; + mockedRealPathSync.mockReturnValue(localPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + mockedExistsSync.mockImplementation( + (p) => p === path.join(projectRoot, 'yarn.lock'), + ); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.YARN); + expect(info.isGlobal).toBe(false); + expect(info.updateMessage).toContain('Locally installed'); + }); + + it('should detect local installation and identify pnpm from lockfile', () => { + const localPath = `${projectRoot}/node_modules/.bin/gemini`; + process.argv[1] = localPath; + mockedRealPathSync.mockReturnValue(localPath); + 
mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + mockedExistsSync.mockImplementation( + (p) => p === path.join(projectRoot, 'pnpm-lock.yaml'), + ); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.PNPM); + expect(info.isGlobal).toBe(false); + }); + + it('should detect local installation and identify bun from lockfile', () => { + const localPath = `${projectRoot}/node_modules/.bin/gemini`; + process.argv[1] = localPath; + mockedRealPathSync.mockReturnValue(localPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + mockedExistsSync.mockImplementation( + (p) => p === path.join(projectRoot, 'bun.lockb'), + ); + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.BUN); + expect(info.isGlobal).toBe(false); + }); + + it('should default to local npm installation if no lockfile is found', () => { + const localPath = `${projectRoot}/node_modules/.bin/gemini`; + process.argv[1] = localPath; + mockedRealPathSync.mockReturnValue(localPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + mockedExistsSync.mockReturnValue(false); // No lockfiles + + const info = getInstallationInfo(projectRoot, false); + + expect(info.packageManager).toBe(PackageManager.NPM); + expect(info.isGlobal).toBe(false); + }); + + it('should default to global npm installation for unrecognized paths', () => { + const globalPath = `/usr/local/bin/gemini`; + process.argv[1] = globalPath; + mockedRealPathSync.mockReturnValue(globalPath); + mockedExecSync.mockImplementation(() => { + throw new Error('Command failed'); + }); + + const info = getInstallationInfo(projectRoot, false); + expect(info.packageManager).toBe(PackageManager.NPM); + expect(info.isGlobal).toBe(true); + expect(info.updateCommand).toBe( + 'npm install -g @qwen-code/qwen-code@latest', + ); + 
expect(info.updateMessage).toContain('Attempting to automatically update'); + + const infoDisabled = getInstallationInfo(projectRoot, true); + expect(infoDisabled.updateMessage).toContain('Please run npm install'); + }); +}); diff --git a/packages/cli/src/utils/installationInfo.ts b/packages/cli/src/utils/installationInfo.ts new file mode 100644 index 00000000..8097f56a --- /dev/null +++ b/packages/cli/src/utils/installationInfo.ts @@ -0,0 +1,177 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { isGitRepository } from '@qwen-code/qwen-code-core'; +import * as fs from 'fs'; +import * as path from 'path'; +import * as childProcess from 'child_process'; + +export enum PackageManager { + NPM = 'npm', + YARN = 'yarn', + PNPM = 'pnpm', + PNPX = 'pnpx', + BUN = 'bun', + BUNX = 'bunx', + HOMEBREW = 'homebrew', + NPX = 'npx', + UNKNOWN = 'unknown', +} + +export interface InstallationInfo { + packageManager: PackageManager; + isGlobal: boolean; + updateCommand?: string; + updateMessage?: string; +} + +export function getInstallationInfo( + projectRoot: string, + isAutoUpdateDisabled: boolean, +): InstallationInfo { + const cliPath = process.argv[1]; + if (!cliPath) { + return { packageManager: PackageManager.UNKNOWN, isGlobal: false }; + } + + try { + // Normalize path separators to forward slashes for consistent matching. + const realPath = fs.realpathSync(cliPath).replace(/\\/g, '/'); + const normalizedProjectRoot = projectRoot?.replace(/\\/g, '/'); + const isGit = isGitRepository(process.cwd()); + + // Check for local git clone first + if ( + isGit && + normalizedProjectRoot && + realPath.startsWith(normalizedProjectRoot) && + !realPath.includes('/node_modules/') + ) { + return { + packageManager: PackageManager.UNKNOWN, // Not managed by a package manager in this sense + isGlobal: false, + updateMessage: + 'Running from a local git clone. 
Please update with "git pull".', + }; + } + + // Check for npx/pnpx + if (realPath.includes('/.npm/_npx') || realPath.includes('/npm/_npx')) { + return { + packageManager: PackageManager.NPX, + isGlobal: false, + updateMessage: 'Running via npx, update not applicable.', + }; + } + if (realPath.includes('/.pnpm/_pnpx')) { + return { + packageManager: PackageManager.PNPX, + isGlobal: false, + updateMessage: 'Running via pnpx, update not applicable.', + }; + } + + // Check for Homebrew + if (process.platform === 'darwin') { + try { + // The package name in homebrew is gemini-cli + childProcess.execSync('brew list -1 | grep -q "^gemini-cli$"', { + stdio: 'ignore', + }); + return { + packageManager: PackageManager.HOMEBREW, + isGlobal: true, + updateMessage: + 'Installed via Homebrew. Please update with "brew upgrade".', + }; + } catch (_error) { + // Brew is not installed or gemini-cli is not installed via brew. + // Continue to the next check. + } + } + + // Check for pnpm + if (realPath.includes('/.pnpm/global')) { + const updateCommand = 'pnpm add -g @qwen-code/qwen-code@latest'; + return { + packageManager: PackageManager.PNPM, + isGlobal: true, + updateCommand, + updateMessage: isAutoUpdateDisabled + ? `Please run ${updateCommand} to update` + : 'Installed with pnpm. Attempting to automatically update now...', + }; + } + + // Check for yarn + if (realPath.includes('/.yarn/global')) { + const updateCommand = 'yarn global add @qwen-code/qwen-code@latest'; + return { + packageManager: PackageManager.YARN, + isGlobal: true, + updateCommand, + updateMessage: isAutoUpdateDisabled + ? `Please run ${updateCommand} to update` + : 'Installed with yarn. 
Attempting to automatically update now...', + }; + } + + // Check for bun + if (realPath.includes('/.bun/install/cache')) { + return { + packageManager: PackageManager.BUNX, + isGlobal: false, + updateMessage: 'Running via bunx, update not applicable.', + }; + } + if (realPath.includes('/.bun/bin')) { + const updateCommand = 'bun add -g @qwen-code/qwen-code@latest'; + return { + packageManager: PackageManager.BUN, + isGlobal: true, + updateCommand, + updateMessage: isAutoUpdateDisabled + ? `Please run ${updateCommand} to update` + : 'Installed with bun. Attempting to automatically update now...', + }; + } + + // Check for local install + if ( + normalizedProjectRoot && + realPath.startsWith(`${normalizedProjectRoot}/node_modules`) + ) { + let pm = PackageManager.NPM; + if (fs.existsSync(path.join(projectRoot, 'yarn.lock'))) { + pm = PackageManager.YARN; + } else if (fs.existsSync(path.join(projectRoot, 'pnpm-lock.yaml'))) { + pm = PackageManager.PNPM; + } else if (fs.existsSync(path.join(projectRoot, 'bun.lockb'))) { + pm = PackageManager.BUN; + } + return { + packageManager: pm, + isGlobal: false, + updateMessage: + "Locally installed. Please update via your project's package.json.", + }; + } + + // Assume global npm + const updateCommand = 'npm install -g @qwen-code/qwen-code@latest'; + return { + packageManager: PackageManager.NPM, + isGlobal: true, + updateCommand, + updateMessage: isAutoUpdateDisabled + ? `Please run ${updateCommand} to update` + : 'Installed with npm. 
Attempting to automatically update now...', + }; + } catch (error) { + console.log(error); + return { packageManager: PackageManager.UNKNOWN, isGlobal: false }; + } +} diff --git a/packages/cli/src/utils/sandbox-macos-permissive-closed.sb b/packages/cli/src/utils/sandbox-macos-permissive-closed.sb index 36d88995..cf64da94 100644 --- a/packages/cli/src/utils/sandbox-macos-permissive-closed.sb +++ b/packages/cli/src/utils/sandbox-macos-permissive-closed.sb @@ -13,6 +13,12 @@ (subpath (string-append (param "HOME_DIR") "/.npm")) (subpath (string-append (param "HOME_DIR") "/.cache")) (subpath (string-append (param "HOME_DIR") "/.gitconfig")) + ;; Allow writes to included directories from --include-directories + (subpath (param "INCLUDE_DIR_0")) + (subpath (param "INCLUDE_DIR_1")) + (subpath (param "INCLUDE_DIR_2")) + (subpath (param "INCLUDE_DIR_3")) + (subpath (param "INCLUDE_DIR_4")) (literal "/dev/stdout") (literal "/dev/stderr") (literal "/dev/null") diff --git a/packages/cli/src/utils/sandbox-macos-permissive-open.sb b/packages/cli/src/utils/sandbox-macos-permissive-open.sb index 552efcd4..50d21a1f 100644 --- a/packages/cli/src/utils/sandbox-macos-permissive-open.sb +++ b/packages/cli/src/utils/sandbox-macos-permissive-open.sb @@ -13,6 +13,12 @@ (subpath (string-append (param "HOME_DIR") "/.npm")) (subpath (string-append (param "HOME_DIR") "/.cache")) (subpath (string-append (param "HOME_DIR") "/.gitconfig")) + ;; Allow writes to included directories from --include-directories + (subpath (param "INCLUDE_DIR_0")) + (subpath (param "INCLUDE_DIR_1")) + (subpath (param "INCLUDE_DIR_2")) + (subpath (param "INCLUDE_DIR_3")) + (subpath (param "INCLUDE_DIR_4")) (literal "/dev/stdout") (literal "/dev/stderr") (literal "/dev/null") diff --git a/packages/cli/src/utils/sandbox-macos-permissive-proxied.sb b/packages/cli/src/utils/sandbox-macos-permissive-proxied.sb index 4410776b..8becc8cb 100644 --- a/packages/cli/src/utils/sandbox-macos-permissive-proxied.sb +++ 
b/packages/cli/src/utils/sandbox-macos-permissive-proxied.sb @@ -13,6 +13,12 @@ (subpath (string-append (param "HOME_DIR") "/.npm")) (subpath (string-append (param "HOME_DIR") "/.cache")) (subpath (string-append (param "HOME_DIR") "/.gitconfig")) + ;; Allow writes to included directories from --include-directories + (subpath (param "INCLUDE_DIR_0")) + (subpath (param "INCLUDE_DIR_1")) + (subpath (param "INCLUDE_DIR_2")) + (subpath (param "INCLUDE_DIR_3")) + (subpath (param "INCLUDE_DIR_4")) (literal "/dev/stdout") (literal "/dev/stderr") (literal "/dev/null") diff --git a/packages/cli/src/utils/sandbox-macos-restrictive-closed.sb b/packages/cli/src/utils/sandbox-macos-restrictive-closed.sb index 9ce68e9d..17d0c073 100644 --- a/packages/cli/src/utils/sandbox-macos-restrictive-closed.sb +++ b/packages/cli/src/utils/sandbox-macos-restrictive-closed.sb @@ -71,6 +71,12 @@ (subpath (string-append (param "HOME_DIR") "/.npm")) (subpath (string-append (param "HOME_DIR") "/.cache")) (subpath (string-append (param "HOME_DIR") "/.gitconfig")) + ;; Allow writes to included directories from --include-directories + (subpath (param "INCLUDE_DIR_0")) + (subpath (param "INCLUDE_DIR_1")) + (subpath (param "INCLUDE_DIR_2")) + (subpath (param "INCLUDE_DIR_3")) + (subpath (param "INCLUDE_DIR_4")) (literal "/dev/stdout") (literal "/dev/stderr") (literal "/dev/null") diff --git a/packages/cli/src/utils/sandbox-macos-restrictive-open.sb b/packages/cli/src/utils/sandbox-macos-restrictive-open.sb index e89b8090..17f27224 100644 --- a/packages/cli/src/utils/sandbox-macos-restrictive-open.sb +++ b/packages/cli/src/utils/sandbox-macos-restrictive-open.sb @@ -71,6 +71,12 @@ (subpath (string-append (param "HOME_DIR") "/.npm")) (subpath (string-append (param "HOME_DIR") "/.cache")) (subpath (string-append (param "HOME_DIR") "/.gitconfig")) + ;; Allow writes to included directories from --include-directories + (subpath (param "INCLUDE_DIR_0")) + (subpath (param "INCLUDE_DIR_1")) + (subpath (param 
"INCLUDE_DIR_2")) + (subpath (param "INCLUDE_DIR_3")) + (subpath (param "INCLUDE_DIR_4")) (literal "/dev/stdout") (literal "/dev/stderr") (literal "/dev/null") diff --git a/packages/cli/src/utils/sandbox-macos-restrictive-proxied.sb b/packages/cli/src/utils/sandbox-macos-restrictive-proxied.sb index a49712a3..c07c1496 100644 --- a/packages/cli/src/utils/sandbox-macos-restrictive-proxied.sb +++ b/packages/cli/src/utils/sandbox-macos-restrictive-proxied.sb @@ -71,6 +71,12 @@ (subpath (string-append (param "HOME_DIR") "/.npm")) (subpath (string-append (param "HOME_DIR") "/.cache")) (subpath (string-append (param "HOME_DIR") "/.gitconfig")) + ;; Allow writes to included directories from --include-directories + (subpath (param "INCLUDE_DIR_0")) + (subpath (param "INCLUDE_DIR_1")) + (subpath (param "INCLUDE_DIR_2")) + (subpath (param "INCLUDE_DIR_3")) + (subpath (param "INCLUDE_DIR_4")) (literal "/dev/stdout") (literal "/dev/stderr") (literal "/dev/null") diff --git a/packages/cli/src/utils/sandbox.ts b/packages/cli/src/utils/sandbox.ts index 7dfbc557..d99e63bd 100644 --- a/packages/cli/src/utils/sandbox.ts +++ b/packages/cli/src/utils/sandbox.ts @@ -9,13 +9,13 @@ import os from 'node:os'; import path from 'node:path'; import fs from 'node:fs'; import { readFile } from 'node:fs/promises'; -import { quote } from 'shell-quote'; +import { quote, parse } from 'shell-quote'; import { USER_SETTINGS_DIR, SETTINGS_DIRECTORY_NAME, } from '../config/settings.js'; import { promisify } from 'util'; -import { SandboxConfig } from '@qwen-code/qwen-code-core'; +import { Config, SandboxConfig } from '@qwen-code/qwen-code-core'; const execAsync = promisify(exec); @@ -183,6 +183,7 @@ function entrypoint(workdir: string): string[] { export async function start_sandbox( config: SandboxConfig, nodeArgs: string[] = [], + cliConfig?: Config, ) { if (config.command === 'sandbox-exec') { // disallow BUILD_SANDBOX @@ -223,6 +224,38 @@ export async function start_sandbox( 
`HOME_DIR=${fs.realpathSync(os.homedir())}`, '-D', `CACHE_DIR=${fs.realpathSync(execSync(`getconf DARWIN_USER_CACHE_DIR`).toString().trim())}`, + ]; + + // Add included directories from the workspace context + // Always add 5 INCLUDE_DIR parameters to ensure .sb files can reference them + const MAX_INCLUDE_DIRS = 5; + const targetDir = fs.realpathSync(cliConfig?.getTargetDir() || ''); + const includedDirs: string[] = []; + + if (cliConfig) { + const workspaceContext = cliConfig.getWorkspaceContext(); + const directories = workspaceContext.getDirectories(); + + // Filter out TARGET_DIR + for (const dir of directories) { + const realDir = fs.realpathSync(dir); + if (realDir !== targetDir) { + includedDirs.push(realDir); + } + } + } + + for (let i = 0; i < MAX_INCLUDE_DIRS; i++) { + let dirPath = '/dev/null'; // Default to a safe path that won't cause issues + + if (i < includedDirs.length) { + dirPath = includedDirs[i]; + } + + args.push('-D', `INCLUDE_DIR_${i}=${dirPath}`); + } + + args.push( '-f', profileFile, 'sh', @@ -232,7 +265,7 @@ export async function start_sandbox( `NODE_OPTIONS="${nodeOptions}"`, ...process.argv.map((arg) => quote([arg])), ].join(' '), - ]; + ); // start and set up proxy if GEMINI_SANDBOX_PROXY_COMMAND is set const proxyCommand = process.env.GEMINI_SANDBOX_PROXY_COMMAND; let proxyProcess: ChildProcess | undefined = undefined; @@ -366,6 +399,14 @@ export async function start_sandbox( // run init binary inside container to forward signals & reap zombies const args = ['run', '-i', '--rm', '--init', '--workdir', containerWorkdir]; + // add custom flags from SANDBOX_FLAGS + if (process.env.SANDBOX_FLAGS) { + const flags = parse(process.env.SANDBOX_FLAGS, process.env).filter( + (f): f is string => typeof f === 'string', + ); + args.push(...flags); + } + // add TTY only if stdin is TTY as well, i.e. 
for piped input don't init TTY in container if (process.stdin.isTTY) { args.push('-t'); diff --git a/packages/cli/src/utils/spawnWrapper.ts b/packages/cli/src/utils/spawnWrapper.ts new file mode 100644 index 00000000..3f3cca94 --- /dev/null +++ b/packages/cli/src/utils/spawnWrapper.ts @@ -0,0 +1,9 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { spawn } from 'child_process'; + +export const spawnWrapper = spawn; diff --git a/packages/cli/src/utils/updateEventEmitter.ts b/packages/cli/src/utils/updateEventEmitter.ts new file mode 100644 index 00000000..a60ef039 --- /dev/null +++ b/packages/cli/src/utils/updateEventEmitter.ts @@ -0,0 +1,13 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { EventEmitter } from 'events'; + +/** + * A shared event emitter for application-wide communication + * between decoupled parts of the CLI. + */ +export const updateEventEmitter = new EventEmitter(); diff --git a/packages/cli/src/validateNonInterActiveAuth.test.ts b/packages/cli/src/validateNonInterActiveAuth.test.ts index 307b20d4..06c3d0b5 100644 --- a/packages/cli/src/validateNonInterActiveAuth.test.ts +++ b/packages/cli/src/validateNonInterActiveAuth.test.ts @@ -10,6 +10,7 @@ import { NonInteractiveConfig, } from './validateNonInterActiveAuth.js'; import { AuthType } from '@qwen-code/qwen-code-core'; +import * as auth from './config/auth.js'; describe('validateNonInterActiveAuth', () => { let originalEnvGeminiApiKey: string | undefined; @@ -67,7 +68,11 @@ describe('validateNonInterActiveAuth', () => { refreshAuth: refreshAuthMock, }; try { - await validateNonInteractiveAuth(undefined, nonInteractiveConfig); + await validateNonInteractiveAuth( + undefined, + undefined, + nonInteractiveConfig, + ); expect.fail('Should have exited'); } catch (e) { expect((e as Error).message).toContain('process.exit(1) called'); @@ -83,7 +88,11 @@ 
describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.LOGIN_WITH_GOOGLE);
   });
@@ -92,7 +101,11 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_GEMINI);
   });
@@ -101,7 +114,11 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_OPENAI);
   });
@@ -112,7 +129,11 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_VERTEX_AI);
   });
@@ -122,7 +143,11 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_VERTEX_AI);
   });
@@ -135,7 +160,11 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.LOGIN_WITH_GOOGLE);
   });
@@ -147,7 +176,11 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_VERTEX_AI);
   });
@@ -159,7 +192,11 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(undefined, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      undefined,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_GEMINI);
   });
@@ -169,20 +206,24 @@ describe('validateNonInterActiveAuth', () => {
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
-    await validateNonInteractiveAuth(AuthType.USE_GEMINI, nonInteractiveConfig);
+    await validateNonInteractiveAuth(
+      AuthType.USE_GEMINI,
+      undefined,
+      nonInteractiveConfig,
+    );
     expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.USE_GEMINI);
   });

   it('exits if validateAuthMethod returns error', async () => {
     // Mock validateAuthMethod to return error
-    const mod = await import('./config/auth.js');
-    vi.spyOn(mod, 'validateAuthMethod').mockReturnValue('Auth error!');
+    vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
     const nonInteractiveConfig: NonInteractiveConfig = {
       refreshAuth: refreshAuthMock,
     };
     try {
       await validateNonInteractiveAuth(
         AuthType.USE_GEMINI,
+        undefined,
         nonInteractiveConfig,
       );
       expect.fail('Should have exited');
@@ -192,4 +233,28 @@ describe('validateNonInterActiveAuth', () => {
     expect(consoleErrorSpy).toHaveBeenCalledWith('Auth error!');
     expect(processExitSpy).toHaveBeenCalledWith(1);
   });
+
+  it('skips validation if useExternalAuth is true', async () => {
+    // Mock validateAuthMethod to return error to ensure it's not being called
+    const validateAuthMethodSpy = vi
+      .spyOn(auth, 'validateAuthMethod')
+      .mockReturnValue('Auth error!');
+    const nonInteractiveConfig: NonInteractiveConfig = {
+      refreshAuth: refreshAuthMock,
+    };
+
+    // Even with an invalid auth type, it should not exit
+    // because validation is skipped.
+    await validateNonInteractiveAuth(
+      'invalid-auth-type' as AuthType,
+      true, // useExternalAuth = true
+      nonInteractiveConfig,
+    );
+
+    expect(validateAuthMethodSpy).not.toHaveBeenCalled();
+    expect(consoleErrorSpy).not.toHaveBeenCalled();
+    expect(processExitSpy).not.toHaveBeenCalled();
+    // We still expect refreshAuth to be called with the (invalid) type
+    expect(refreshAuthMock).toHaveBeenCalledWith('invalid-auth-type');
+  });
 });
diff --git a/packages/cli/src/validateNonInterActiveAuth.ts b/packages/cli/src/validateNonInterActiveAuth.ts
index 0a974b0e..63a6166c 100644
--- a/packages/cli/src/validateNonInterActiveAuth.ts
+++ b/packages/cli/src/validateNonInterActiveAuth.ts
@@ -26,6 +26,7 @@ function getAuthTypeFromEnv(): AuthType | undefined {

 export async function validateNonInteractiveAuth(
   configuredAuthType: AuthType | undefined,
+  useExternalAuth: boolean | undefined,
   nonInteractiveConfig: Config,
 ) {
   const effectiveAuthType = configuredAuthType || getAuthTypeFromEnv();
@@ -37,10 +38,12 @@ export async function validateNonInteractiveAuth(
     process.exit(1);
   }

-  const err = validateAuthMethod(effectiveAuthType);
-  if (err != null) {
-    console.error(err);
-    process.exit(1);
+  if (!useExternalAuth) {
+    const err = validateAuthMethod(effectiveAuthType);
+    if (err != null) {
+      console.error(err);
+      process.exit(1);
+    }
   }

   await nonInteractiveConfig.refreshAuth(effectiveAuthType);
diff --git a/packages/cli/test-setup.ts b/packages/cli/test-setup.ts
new file mode 100644
index 00000000..a419c873
--- /dev/null
+++ b/packages/cli/test-setup.ts
@@ -0,0 +1,7 @@
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import './src/test-utils/customMatchers.js';
diff --git a/packages/cli/vitest.config.ts b/packages/cli/vitest.config.ts
index 8f67a0be..5a3f99fe 100644
--- a/packages/cli/vitest.config.ts
+++ b/packages/cli/vitest.config.ts
@@ -18,6 +18,7 @@ export default defineConfig({
     outputFile: {
       junit: 'junit.xml',
     },
+    setupFiles: ['./test-setup.ts'],
     coverage: {
       enabled: true,
       provider: 'v8',
diff --git a/packages/core/package.json b/packages/core/package.json
index 99affe1a..601ec790 100644
--- a/packages/core/package.json
+++ b/packages/core/package.json
@@ -38,6 +38,7 @@
     "html-to-text": "^9.0.5",
     "https-proxy-agent": "^7.0.6",
     "ignore": "^7.0.0",
+    "marked": "^15.0.12",
     "micromatch": "^4.0.8",
     "open": "^10.1.2",
     "openai": "^5.7.0",
diff --git a/packages/core/src/code_assist/converter.test.ts b/packages/core/src/code_assist/converter.test.ts
index 03f388dc..3d3a8ef3 100644
--- a/packages/core/src/code_assist/converter.test.ts
+++ b/packages/core/src/code_assist/converter.test.ts
@@ -24,7 +24,12 @@ describe('converter', () => {
       model: 'gemini-pro',
       contents: [{ role: 'user', parts: [{ text: 'Hello' }] }],
     };
-    const codeAssistReq = toGenerateContentRequest(genaiReq, 'my-project');
+    const codeAssistReq = toGenerateContentRequest(
+      genaiReq,
+      'my-prompt',
+      'my-project',
+      'my-session',
+    );
     expect(codeAssistReq).toEqual({
       model: 'gemini-pro',
       project: 'my-project',
@@ -37,8 +42,9 @@
         labels: undefined,
         safetySettings: undefined,
         generationConfig: undefined,
-        session_id: undefined,
+        session_id: 'my-session',
       },
+      user_prompt_id: 'my-prompt',
     });
   });
@@ -47,7 +53,12 @@
       model: 'gemini-pro',
      contents: [{ role: 'user', parts: [{ text: 'Hello' }] }],
     };
-    const codeAssistReq = toGenerateContentRequest(genaiReq);
+    const codeAssistReq = toGenerateContentRequest(
+      genaiReq,
+      'my-prompt',
+      undefined,
+      'my-session',
+    );
     expect(codeAssistReq).toEqual({
       model: 'gemini-pro',
       project: undefined,
@@ -60,8 +71,9 @@
         labels: undefined,
         safetySettings: undefined,
         generationConfig: undefined,
-        session_id: undefined,
+        session_id: 'my-session',
       },
+      user_prompt_id: 'my-prompt',
     });
   });
@@ -72,6 +84,7 @@
     };
     const codeAssistReq = toGenerateContentRequest(
       genaiReq,
+      'my-prompt',
       'my-project',
       'session-123',
     );
@@ -89,6 +102,7 @@
       generationConfig: undefined,
       session_id: 'session-123',
       },
+      user_prompt_id: 'my-prompt',
     });
   });
@@ -97,7 +111,12 @@
       model: 'gemini-pro',
       contents: 'Hello',
     };
-    const codeAssistReq = toGenerateContentRequest(genaiReq);
+    const codeAssistReq = toGenerateContentRequest(
+      genaiReq,
+      'my-prompt',
+      'my-project',
+      'my-session',
+    );
     expect(codeAssistReq.request.contents).toEqual([
       { role: 'user', parts: [{ text: 'Hello' }] },
     ]);
@@ -108,7 +127,12 @@
       model: 'gemini-pro',
       contents: [{ text: 'Hello' }, { text: 'World' }],
     };
-    const codeAssistReq = toGenerateContentRequest(genaiReq);
+    const codeAssistReq = toGenerateContentRequest(
+      genaiReq,
+      'my-prompt',
+      'my-project',
+      'my-session',
+    );
     expect(codeAssistReq.request.contents).toEqual([
       { role: 'user', parts: [{ text: 'Hello' }] },
       { role: 'user', parts: [{ text: 'World' }] },
@@ -123,7 +147,12 @@
       systemInstruction: 'You are a helpful assistant.',
       },
     };
-    const codeAssistReq = toGenerateContentRequest(genaiReq);
+    const codeAssistReq = toGenerateContentRequest(
+      genaiReq,
+      'my-prompt',
+      'my-project',
+      'my-session',
+    );
     expect(codeAssistReq.request.systemInstruction).toEqual({
       role: 'user',
       parts: [{ text: 'You are a helpful assistant.' }],
@@ -139,7 +168,12 @@
       topK: 40,
       },
     };
-    const codeAssistReq = toGenerateContentRequest(genaiReq);
+    const codeAssistReq = toGenerateContentRequest(
+      genaiReq,
+      'my-prompt',
+      'my-project',
+      'my-session',
+    );
     expect(codeAssistReq.request.generationConfig).toEqual({
       temperature: 0.8,
       topK: 40,
@@ -165,7 +199,12 @@
       responseMimeType: 'application/json',
       },
     };
-    const codeAssistReq = toGenerateContentRequest(genaiReq);
+    const codeAssistReq = toGenerateContentRequest(
+      genaiReq,
+      'my-prompt',
+      'my-project',
+      'my-session',
+    );
     expect(codeAssistReq.request.generationConfig).toEqual({
       temperature: 0.1,
       topP: 0.2,
diff --git a/packages/core/src/code_assist/converter.ts b/packages/core/src/code_assist/converter.ts
index 8340cfc1..ffd471da 100644
--- a/packages/core/src/code_assist/converter.ts
+++ b/packages/core/src/code_assist/converter.ts
@@ -32,6 +32,7 @@ import {
 export interface CAGenerateContentRequest {
   model: string;
   project?: string;
+  user_prompt_id?: string;
   request: VertexGenerateContentRequest;
 }
@@ -115,12 +116,14 @@ export function fromCountTokenResponse(

 export function toGenerateContentRequest(
   req: GenerateContentParameters,
+  userPromptId: string,
   project?: string,
   sessionId?: string,
 ): CAGenerateContentRequest {
   return {
     model: req.model,
     project,
+    user_prompt_id: userPromptId,
     request: toVertexGenerateContentRequest(req, sessionId),
   };
 }
diff --git a/packages/core/src/code_assist/server.test.ts b/packages/core/src/code_assist/server.test.ts
index 6246fd4e..3fc1891f 100644
--- a/packages/core/src/code_assist/server.test.ts
+++ b/packages/core/src/code_assist/server.test.ts
@@ -14,13 +14,25 @@ vi.mock('google-auth-library');

 describe('CodeAssistServer', () => {
   it('should be able to be constructed', () => {
     const auth = new OAuth2Client();
-    const server = new CodeAssistServer(auth, 'test-project');
+    const server = new CodeAssistServer(
+      auth,
+      'test-project',
+      {},
+      'test-session',
+      UserTierId.FREE,
+    );
     expect(server).toBeInstanceOf(CodeAssistServer);
   });

   it('should call the generateContent endpoint', async () => {
     const client = new OAuth2Client();
-    const server = new CodeAssistServer(client, 'test-project');
+    const server = new CodeAssistServer(
+      client,
+      'test-project',
+      {},
+      'test-session',
+      UserTierId.FREE,
+    );
     const mockResponse = {
       response: {
         candidates: [
@@ -38,10 +50,13 @@
     };
     vi.spyOn(server, 'requestPost').mockResolvedValue(mockResponse);
-    const response = await server.generateContent({
-      model: 'test-model',
-      contents: [{ role: 'user', parts: [{ text: 'request' }] }],
-    });
+    const response = await server.generateContent(
+      {
+        model: 'test-model',
+        contents: [{ role: 'user', parts: [{ text: 'request' }] }],
+      },
+      'user-prompt-id',
+    );

     expect(server.requestPost).toHaveBeenCalledWith(
       'generateContent',
@@ -55,7 +70,13 @@

   it('should call the generateContentStream endpoint', async () => {
     const client = new OAuth2Client();
-    const server = new CodeAssistServer(client, 'test-project');
+    const server = new CodeAssistServer(
+      client,
+      'test-project',
+      {},
+      'test-session',
+      UserTierId.FREE,
+    );
     const mockResponse = (async function* () {
       yield {
         response: {
@@ -75,10 +96,13 @@
     })();
     vi.spyOn(server, 'requestStreamingPost').mockResolvedValue(mockResponse);
-    const stream = await server.generateContentStream({
-      model: 'test-model',
-      contents: [{ role: 'user', parts: [{ text: 'request' }] }],
-    });
+    const stream = await server.generateContentStream(
+      {
+        model: 'test-model',
+        contents: [{ role: 'user', parts: [{ text: 'request' }] }],
+      },
+      'user-prompt-id',
+    );

     for await (const res of stream) {
       expect(server.requestStreamingPost).toHaveBeenCalledWith(
@@ -92,7 +116,13 @@

   it('should call the onboardUser endpoint', async () => {
     const client = new OAuth2Client();
-    const server = new CodeAssistServer(client, 'test-project');
+    const server = new CodeAssistServer(
+      client,
+      'test-project',
+      {},
+      'test-session',
+      UserTierId.FREE,
+    );
     const mockResponse = {
       name: 'operations/123',
       done: true,
@@ -114,7 +144,13 @@

   it('should call the loadCodeAssist endpoint', async () => {
     const client = new OAuth2Client();
-    const server = new CodeAssistServer(client, 'test-project');
+    const server = new CodeAssistServer(
+      client,
+      'test-project',
+      {},
+      'test-session',
+      UserTierId.FREE,
+    );
     const mockResponse = {
       currentTier: {
         id: UserTierId.FREE,
@@ -140,7 +176,13 @@

   it('should return 0 for countTokens', async () => {
     const client = new OAuth2Client();
-    const server = new CodeAssistServer(client, 'test-project');
+    const server = new CodeAssistServer(
+      client,
+      'test-project',
+      {},
+      'test-session',
+      UserTierId.FREE,
+    );
     const mockResponse = {
       totalTokens: 100,
     };
@@ -155,7 +197,13 @@

   it('should throw an error for embedContent', async () => {
     const client = new OAuth2Client();
-    const server = new CodeAssistServer(client, 'test-project');
+    const server = new CodeAssistServer(
+      client,
+      'test-project',
+      {},
+      'test-session',
+      UserTierId.FREE,
+    );
     await expect(
       server.embedContent({
         model: 'test-model',
diff --git a/packages/core/src/code_assist/server.ts b/packages/core/src/code_assist/server.ts
index 0bbab1f5..8cafa946 100644
--- a/packages/core/src/code_assist/server.ts
+++ b/packages/core/src/code_assist/server.ts
@@ -53,10 +53,16 @@

   async generateContentStream(
     req: GenerateContentParameters,
+    userPromptId: string,
   ): Promise<AsyncGenerator<GenerateContentResponse>> {
     const resps = await this.requestStreamingPost(
       'streamGenerateContent',
-      toGenerateContentRequest(req, this.projectId, this.sessionId),
+      toGenerateContentRequest(
+        req,
+        userPromptId,
+        this.projectId,
+        this.sessionId,
+      ),
       req.config?.abortSignal,
     );
     return (async function* (): AsyncGenerator<GenerateContentResponse> {
@@ -68,10 +74,16 @@

   async generateContent(
     req: GenerateContentParameters,
+    userPromptId: string,
   ): Promise<GenerateContentResponse> {
     const resp = await this.requestPost(
       'generateContent',
-      toGenerateContentRequest(req, this.projectId, this.sessionId),
+      toGenerateContentRequest(
+        req,
+        userPromptId,
+        this.projectId,
+        this.sessionId,
+      ),
       req.config?.abortSignal,
     );
     return fromGenerateContentResponse(resp);
diff --git a/packages/core/src/code_assist/setup.test.ts b/packages/core/src/code_assist/setup.test.ts
index 6db5fd88..c1260e3f 100644
--- a/packages/core/src/code_assist/setup.test.ts
+++ b/packages/core/src/code_assist/setup.test.ts
@@ -49,8 +49,11 @@ describe('setupUser', () => {
     });
     await setupUser({} as OAuth2Client);
     expect(CodeAssistServer).toHaveBeenCalledWith(
-      expect.any(Object),
+      {},
       'test-project',
+      {},
+      '',
+      undefined,
     );
   });
@@ -62,7 +65,10 @@
     });
     const projectId = await setupUser({} as OAuth2Client);
     expect(CodeAssistServer).toHaveBeenCalledWith(
-      expect.any(Object),
+      {},
+      undefined,
+      {},
+      '',
       undefined,
     );
     expect(projectId).toEqual({
diff --git a/packages/core/src/code_assist/setup.ts b/packages/core/src/code_assist/setup.ts
index 8831d24b..9c7a8043 100644
--- a/packages/core/src/code_assist/setup.ts
+++ b/packages/core/src/code_assist/setup.ts
@@ -34,7 +34,7 @@ export interface UserData {
  */
 export async function setupUser(client: OAuth2Client): Promise<UserData> {
   let projectId = process.env.GOOGLE_CLOUD_PROJECT || undefined;
-  const caServer = new CodeAssistServer(client, projectId);
+  const caServer = new CodeAssistServer(client, projectId, {}, '', undefined);
   const clientMetadata: ClientMetadata = {
     ideType: 'IDE_UNSPECIFIED',
diff --git a/packages/core/src/config/config.test.ts b/packages/core/src/config/config.test.ts
index 3f0b3db5..dd50fd41 100644
--- a/packages/core/src/config/config.test.ts
+++ b/packages/core/src/config/config.test.ts
@@ -18,6 +18,19 @@ import {
 } from '../core/contentGenerator.js';
 import { GeminiClient } from '../core/client.js';
 import { GitService } from '../services/gitService.js';
+import { IdeClient } from '../ide/ide-client.js';
+
+vi.mock('fs', async (importOriginal) => {
+  const actual = await importOriginal();
+  return {
+    ...actual,
+    existsSync: vi.fn().mockReturnValue(true),
+    statSync: vi.fn().mockReturnValue({
+      isDirectory: vi.fn().mockReturnValue(true),
+    }),
+    realpathSync: vi.fn((path) => path),
+  };
+});

 // Mock dependencies that might be called during Config construction or createServerConfig
 vi.mock('../tools/tool-registry', () => {
@@ -107,6 +120,7 @@ describe('Server Config (config.ts)', () => {
     telemetry: TELEMETRY_SETTINGS,
     sessionId: SESSION_ID,
     model: MODEL,
+    ideClient: IdeClient.getInstance(false),
   };

   beforeEach(() => {
@@ -152,6 +166,10 @@
     (createContentGeneratorConfig as Mock).mockReturnValue(mockContentConfig);

+    // Set fallback mode to true to ensure it gets reset
+    config.setFallbackMode(true);
+    expect(config.isInFallbackMode()).toBe(true);
+
     await config.refreshAuth(authType);

     expect(createContentGeneratorConfig).toHaveBeenCalledWith(
@@ -163,6 +181,89 @@
     expect(config.getContentGeneratorConfig().model).toBe(newModel);
     expect(config.getModel()).toBe(newModel); // getModel() should return the updated model
     expect(GeminiClient).toHaveBeenCalledWith(config);
+    // Verify that fallback mode is reset
+    expect(config.isInFallbackMode()).toBe(false);
+  });
+
+  it('should preserve conversation history when refreshing auth', async () => {
+    const config = new Config(baseParams);
+    const authType = AuthType.USE_GEMINI;
+    const mockContentConfig = {
+      model: 'gemini-pro',
+      apiKey: 'test-key',
+    };
+
+    (createContentGeneratorConfig as Mock).mockReturnValue(mockContentConfig);
+
+    // Mock the existing client with some history
+    const mockExistingHistory = [
+      { role: 'user', parts: [{ text: 'Hello' }] },
+      { role: 'model', parts: [{ text: 'Hi there!' }] },
+      { role: 'user', parts: [{ text: 'How are you?' }] },
+    ];
+
+    const mockExistingClient = {
+      isInitialized: vi.fn().mockReturnValue(true),
+      getHistory: vi.fn().mockReturnValue(mockExistingHistory),
+    };
+
+    const mockNewClient = {
+      isInitialized: vi.fn().mockReturnValue(true),
+      getHistory: vi.fn().mockReturnValue([]),
+      setHistory: vi.fn(),
+      initialize: vi.fn().mockResolvedValue(undefined),
+    };
+
+    // Set the existing client
+    (
+      config as unknown as { geminiClient: typeof mockExistingClient }
+    ).geminiClient = mockExistingClient;
+    (GeminiClient as Mock).mockImplementation(() => mockNewClient);
+
+    await config.refreshAuth(authType);
+
+    // Verify that existing history was retrieved
+    expect(mockExistingClient.getHistory).toHaveBeenCalled();
+
+    // Verify that new client was created and initialized
+    expect(GeminiClient).toHaveBeenCalledWith(config);
+    expect(mockNewClient.initialize).toHaveBeenCalledWith(mockContentConfig);
+
+    // Verify that history was restored to the new client
+    expect(mockNewClient.setHistory).toHaveBeenCalledWith(
+      mockExistingHistory,
+    );
+  });
+
+  it('should handle case when no existing client is initialized', async () => {
+    const config = new Config(baseParams);
+    const authType = AuthType.USE_GEMINI;
+    const mockContentConfig = {
+      model: 'gemini-pro',
+      apiKey: 'test-key',
+    };
+
+    (createContentGeneratorConfig as Mock).mockReturnValue(mockContentConfig);
+
+    const mockNewClient = {
+      isInitialized: vi.fn().mockReturnValue(true),
+      getHistory: vi.fn().mockReturnValue([]),
+      setHistory: vi.fn(),
+      initialize: vi.fn().mockResolvedValue(undefined),
+    };
+
+    // No existing client
+    (config as unknown as { geminiClient: null }).geminiClient = null;
+    (GeminiClient as Mock).mockImplementation(() => mockNewClient);
+
+    await config.refreshAuth(authType);
+
+    // Verify that new client was created and initialized
+    expect(GeminiClient).toHaveBeenCalledWith(config);
+    expect(mockNewClient.initialize).toHaveBeenCalledWith(mockContentConfig);
+
+    // Verify that setHistory was not called since there was no existing history
+    expect(mockNewClient.setHistory).not.toHaveBeenCalled();
   });
 });
@@ -213,6 +314,23 @@
     expect(config.getFileFilteringRespectGitIgnore()).toBe(false);
   });

+  it('should initialize WorkspaceContext with includeDirectories', () => {
+    const includeDirectories = ['/path/to/dir1', '/path/to/dir2'];
+    const paramsWithIncludeDirs: ConfigParameters = {
+      ...baseParams,
+      includeDirectories,
+    };
+    const config = new Config(paramsWithIncludeDirs);
+    const workspaceContext = config.getWorkspaceContext();
+    const directories = workspaceContext.getDirectories();
+
+    // Should include the target directory plus the included directories
+    expect(directories).toHaveLength(3);
+    expect(directories).toContain(path.resolve(baseParams.targetDir));
+    expect(directories).toContain('/path/to/dir1');
+    expect(directories).toContain('/path/to/dir2');
+  });
+
   it('Config constructor should set telemetry to true when provided as true', () => {
     const paramsWithTelemetry: ConfigParameters = {
       ...baseParams,
diff --git a/packages/core/src/config/config.ts b/packages/core/src/config/config.ts
index feb67a92..f17b37f9 100644
--- a/packages/core/src/config/config.ts
+++ b/packages/core/src/config/config.ts
@@ -47,9 +47,11 @@ import { ClearcutLogger } from '../telemetry/clearcut-logger/clearcut-logger.js';
 import { shouldAttemptBrowserLaunch } from '../utils/browser.js';
 import { MCPOAuthConfig } from '../mcp/oauth-provider.js';
 import { IdeClient } from '../ide/ide-client.js';
+import type { Content } from '@google/genai';

 // Re-export OAuth config type
 export type { MCPOAuthConfig };
+import { WorkspaceContext } from '../utils/workspaceContext.js';

 export enum ApprovalMode {
   DEFAULT = 'default',
@@ -81,6 +83,7 @@ export interface GeminiCLIExtension {
   name: string;
   version: string;
   isActive: boolean;
+  path: string;
 }
 export interface FileFilteringOptions {
   respectGitIgnore: boolean;
@@ -171,6 +174,7 @@ export interface ConfigParameters {
   proxy?: string;
   cwd: string;
   fileDiscoveryService?: FileDiscoveryService;
+  includeDirectories?: string[];
   bugCommand?: BugCommandSettings;
   model: string;
   extensionContextFilePaths?: string[];
@@ -183,6 +187,7 @@
   blockedMcpServers?: Array<{ name: string; extensionName: string }>;
   noBrowser?: boolean;
   summarizeToolOutput?: Record;
+  ideModeFeature?: boolean;
   ideMode?: boolean;
   ideClient?: IdeClient;
   enableOpenAILogging?: boolean;
@@ -206,6 +211,7 @@ export class Config {
   private readonly embeddingModel: string;
   private readonly sandbox: SandboxConfig | undefined;
   private readonly targetDir: string;
+  private workspaceContext: WorkspaceContext;
   private readonly debugMode: boolean;
   private readonly question: string | undefined;
   private readonly fullContext: boolean;
@@ -237,14 +243,15 @@
   private readonly model: string;
   private readonly extensionContextFilePaths: string[];
   private readonly noBrowser: boolean;
-  private readonly ideMode: boolean;
-  private readonly ideClient: IdeClient | undefined;
+  private readonly ideModeFeature: boolean;
+  private ideMode: boolean;
+  private ideClient: IdeClient;
+  private inFallbackMode = false;
   private readonly systemPromptMappings?: Array<{
     baseUrls?: string[];
     modelNames?: string[];
     template?: string;
   }>;
-  private modelSwitchedDuringSession: boolean = false;
   private readonly maxSessionTurns: number;
   private readonly sessionTokenLimit: number;
   private readonly maxFolderItems: number;
@@ -272,6 +279,10 @@
       params.embeddingModel ?? DEFAULT_GEMINI_EMBEDDING_MODEL;
     this.sandbox = params.sandbox;
     this.targetDir = path.resolve(params.targetDir);
+    this.workspaceContext = new WorkspaceContext(
+      this.targetDir,
+      params.includeDirectories ?? [],
+    );
     this.debugMode = params.debugMode;
     this.question = params.question;
     this.fullContext = params.fullContext ?? false;
@@ -317,8 +328,11 @@
     this._blockedMcpServers = params.blockedMcpServers ?? [];
     this.noBrowser = params.noBrowser ?? false;
     this.summarizeToolOutput = params.summarizeToolOutput;
+    this.ideModeFeature = params.ideModeFeature ?? false;
     this.ideMode = params.ideMode ?? false;
-    this.ideClient = params.ideClient;
+    this.ideClient =
+      params.ideClient ??
+      IdeClient.getInstance(this.ideMode && this.ideModeFeature);
     this.systemPromptMappings = params.systemPromptMappings;
     this.enableOpenAILogging = params.enableOpenAILogging ?? false;
     this.sampling_params = params.sampling_params;
@@ -352,16 +366,33 @@
   }

   async refreshAuth(authMethod: AuthType) {
-    this.contentGeneratorConfig = createContentGeneratorConfig(
+    // Save the current conversation history before creating a new client
+    let existingHistory: Content[] = [];
+    if (this.geminiClient && this.geminiClient.isInitialized()) {
+      existingHistory = this.geminiClient.getHistory();
+    }
+
+    // Create new content generator config
+    const newContentGeneratorConfig = createContentGeneratorConfig(
       this,
       authMethod,
     );
-    this.geminiClient = new GeminiClient(this);
-    await this.geminiClient.initialize(this.contentGeneratorConfig);
+    // Create and initialize new client in local variable first
+    const newGeminiClient = new GeminiClient(this);
+    await newGeminiClient.initialize(newContentGeneratorConfig);
+
+    // Only assign to instance properties after successful initialization
+    this.contentGeneratorConfig = newContentGeneratorConfig;
+    this.geminiClient = newGeminiClient;
+
+    // Restore the conversation history to the new client
+    if (existingHistory.length > 0) {
+      this.geminiClient.setHistory(existingHistory);
+    }

     // Reset the session flag since we're explicitly changing auth and using default model
-    this.modelSwitchedDuringSession = false;
+    this.inFallbackMode = false;
   }

   getSessionId(): string {
@@ -379,19 +410,15 @@
   setModel(newModel: string): void {
     if (this.contentGeneratorConfig) {
       this.contentGeneratorConfig.model = newModel;
-      this.modelSwitchedDuringSession = true;
     }
   }

-  isModelSwitchedDuringSession(): boolean {
-    return this.modelSwitchedDuringSession;
+  isInFallbackMode(): boolean {
+    return this.inFallbackMode;
   }

-  resetModelToDefault(): void {
-    if (this.contentGeneratorConfig) {
-      this.contentGeneratorConfig.model = this.model; // Reset to the original default model
-      this.modelSwitchedDuringSession = false;
-    }
+  setFallbackMode(active: boolean): void {
+    this.inFallbackMode = active;
   }

   setFlashFallbackHandler(handler: FlashFallbackHandler): void {
@@ -426,6 +453,17 @@
     return this.sandbox;
   }

+  isRestrictiveSandbox(): boolean {
+    const sandboxConfig = this.getSandbox();
+    const seatbeltProfile = process.env.SEATBELT_PROFILE;
+    return (
+      !!sandboxConfig &&
+      sandboxConfig.command === 'sandbox-exec' &&
+      !!seatbeltProfile &&
+      seatbeltProfile.startsWith('restrictive-')
+    );
+  }
+
   getTargetDir(): string {
     return this.targetDir;
   }
@@ -434,6 +472,10 @@
     return this.targetDir;
   }

+  getWorkspaceContext(): WorkspaceContext {
+    return this.workspaceContext;
+  }
+
   getToolRegistry(): Promise<ToolRegistry> {
     return Promise.resolve(this.toolRegistry);
   }
@@ -620,12 +662,28 @@
     return this.summarizeToolOutput;
   }

+  getIdeModeFeature(): boolean {
+    return this.ideModeFeature;
+  }
+
+  getIdeClient(): IdeClient {
+    return this.ideClient;
+  }
+
   getIdeMode(): boolean {
     return this.ideMode;
   }

-  getIdeClient(): IdeClient | undefined {
-    return this.ideClient;
+  setIdeMode(value: boolean): void {
+    this.ideMode = value;
+  }
+
+  setIdeClientDisconnected(): void {
+    this.ideClient.setDisconnected();
+  }
+
+  setIdeClientConnected(): void {
+    this.ideClient.reconnect(this.ideMode && this.ideModeFeature);
   }

   getEnableOpenAILogging(): boolean {
diff --git a/packages/core/src/config/flashFallback.test.ts b/packages/core/src/config/flashFallback.test.ts
index 64f0f6fd..0b68f993 100644
--- a/packages/core/src/config/flashFallback.test.ts
+++ b/packages/core/src/config/flashFallback.test.ts
@@ -4,20 +4,29 @@
  * SPDX-License-Identifier: Apache-2.0
  */

-import { describe, it, expect, beforeEach } from 'vitest';
+import { describe, it, expect, beforeEach, vi } from 'vitest';
 import { Config } from './config.js';
 import { DEFAULT_GEMINI_MODEL, DEFAULT_GEMINI_FLASH_MODEL } from './models.js';
+import { IdeClient } from '../ide/ide-client.js';
+import fs from 'node:fs';
+
+vi.mock('node:fs');

 describe('Flash Model Fallback Configuration', () => {
   let config: Config;

   beforeEach(() => {
+    vi.mocked(fs.existsSync).mockReturnValue(true);
+    vi.mocked(fs.statSync).mockReturnValue({
+      isDirectory: () => true,
+    } as fs.Stats);
     config = new Config({
       sessionId: 'test-session',
       targetDir: '/test',
       debugMode: false,
       cwd: '/test',
       model: DEFAULT_GEMINI_MODEL,
+      ideClient: IdeClient.getInstance(false),
     });

     // Initialize contentGeneratorConfig for testing
@@ -29,26 +38,11 @@
     };
   });

+  // These tests do not actually exercise fallback: isInFallbackMode() only returns
+  // true once setFallbackMode(true) has been called. This decouples setting a model
+  // from the fallback mechanism, which will be necessary when we introduce more
+  // intelligent model routing.
   describe('setModel', () => {
-    it('should update the model and mark as switched during session', () => {
-      expect(config.getModel()).toBe(DEFAULT_GEMINI_MODEL);
-      expect(config.isModelSwitchedDuringSession()).toBe(false);
-
-      config.setModel(DEFAULT_GEMINI_FLASH_MODEL);
-
-      expect(config.getModel()).toBe(DEFAULT_GEMINI_FLASH_MODEL);
-      expect(config.isModelSwitchedDuringSession()).toBe(true);
-    });
-
-    it('should handle multiple model switches during session', () => {
-      config.setModel(DEFAULT_GEMINI_FLASH_MODEL);
-      expect(config.isModelSwitchedDuringSession()).toBe(true);
-
-      config.setModel('gemini-1.5-pro');
-      expect(config.getModel()).toBe('gemini-1.5-pro');
-      expect(config.isModelSwitchedDuringSession()).toBe(true);
-    });
-
     it('should only mark as switched if contentGeneratorConfig exists', () => {
       // Create config without initializing contentGeneratorConfig
       const newConfig = new Config({
@@ -57,11 +51,12 @@
         debugMode: false,
         cwd: '/test',
         model: DEFAULT_GEMINI_MODEL,
+        ideClient: IdeClient.getInstance(false),
       });

       // Should not crash when contentGeneratorConfig is undefined
       newConfig.setModel(DEFAULT_GEMINI_FLASH_MODEL);
-      expect(newConfig.isModelSwitchedDuringSession()).toBe(false);
+      expect(newConfig.isInFallbackMode()).toBe(false);
     });
   });
@@ -80,60 +75,32 @@
         debugMode: false,
         cwd: '/test',
         model: 'custom-model',
+        ideClient: IdeClient.getInstance(false),
       });

       expect(newConfig.getModel()).toBe('custom-model');
     });
   });

-  describe('isModelSwitchedDuringSession', () => {
+  describe('isInFallbackMode', () => {
     it('should start as false for new session', () => {
-      expect(config.isModelSwitchedDuringSession()).toBe(false);
+      expect(config.isInFallbackMode()).toBe(false);
     });

     it('should remain false if no model switch occurs', () => {
       // Perform other operations that don't involve model switching
-      expect(config.isModelSwitchedDuringSession()).toBe(false);
+
expect(config.isInFallbackMode()).toBe(false); }); it('should persist switched state throughout session', () => { config.setModel(DEFAULT_GEMINI_FLASH_MODEL); - expect(config.isModelSwitchedDuringSession()).toBe(true); + // Clients are expected to set fallback mode explicitly + config.setFallbackMode(true); + expect(config.isInFallbackMode()).toBe(true); // Should remain true even after getting model config.getModel(); - expect(config.isModelSwitchedDuringSession()).toBe(true); - }); - }); - - describe('resetModelToDefault', () => { - it('should reset model to default and clear session switch flag', () => { - // Switch to Flash first - config.setModel(DEFAULT_GEMINI_FLASH_MODEL); - expect(config.getModel()).toBe(DEFAULT_GEMINI_FLASH_MODEL); - expect(config.isModelSwitchedDuringSession()).toBe(true); - - // Reset to default - config.resetModelToDefault(); - - // Should be back to default with flag cleared - expect(config.getModel()).toBe(DEFAULT_GEMINI_MODEL); - expect(config.isModelSwitchedDuringSession()).toBe(false); - }); - - it('should handle case where contentGeneratorConfig is not initialized', () => { - // Create config without initializing contentGeneratorConfig - const newConfig = new Config({ - sessionId: 'test-session-2', - targetDir: '/test', - debugMode: false, - cwd: '/test', - model: DEFAULT_GEMINI_MODEL, - }); - - // Should not crash when contentGeneratorConfig is undefined - expect(() => newConfig.resetModelToDefault()).not.toThrow(); - expect(newConfig.isModelSwitchedDuringSession()).toBe(false); + expect(config.isInFallbackMode()).toBe(true); }); }); }); diff --git a/packages/core/src/config/models.ts b/packages/core/src/config/models.ts index a9e5f745..15ca9cf1 100644 --- a/packages/core/src/config/models.ts +++ b/packages/core/src/config/models.ts @@ -6,4 +6,6 @@ export const DEFAULT_GEMINI_MODEL = 'qwen3-coder-plus'; export const DEFAULT_GEMINI_FLASH_MODEL = 'gemini-2.5-flash'; +export const DEFAULT_GEMINI_FLASH_LITE_MODEL = 
'gemini-2.5-flash-lite'; + export const DEFAULT_GEMINI_EMBEDDING_MODEL = 'gemini-embedding-001'; diff --git a/packages/core/src/core/__snapshots__/prompts.test.ts.snap b/packages/core/src/core/__snapshots__/prompts.test.ts.snap index 774bf9c7..2bd104ea 100644 --- a/packages/core/src/core/__snapshots__/prompts.test.ts.snap +++ b/packages/core/src/core/__snapshots__/prompts.test.ts.snap @@ -66,7 +66,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" 
+- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. ## Interaction Details @@ -304,7 +304,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. 
-- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. 
## Interaction Details @@ -552,7 +552,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. 
If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. ## Interaction Details @@ -785,7 +785,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. 
Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. ## Interaction Details @@ -1018,7 +1018,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. 
\`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. 
## Interaction Details @@ -1251,7 +1251,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. 
If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. ## Interaction Details @@ -1484,7 +1484,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. 
Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. ## Interaction Details @@ -1717,7 +1717,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. 
\`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. 
## Interaction Details @@ -1950,7 +1950,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the 'run_shell_command' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the 'save_memory' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. 
If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. ## Interaction Details diff --git a/packages/core/src/core/client.test.ts b/packages/core/src/core/client.test.ts index f091a8cd..0af01726 100644 --- a/packages/core/src/core/client.test.ts +++ b/packages/core/src/core/client.test.ts @@ -202,8 +202,13 @@ describe('Gemini Client (client.ts)', () => { getNoBrowser: vi.fn().mockReturnValue(false), getSystemPromptMappings: vi.fn().mockReturnValue(undefined), getUsageStatisticsEnabled: vi.fn().mockReturnValue(true), - getIdeMode: vi.fn().mockReturnValue(false), + getIdeModeFeature: vi.fn().mockReturnValue(false), + getIdeMode: vi.fn().mockReturnValue(true), + getWorkspaceContext: vi.fn().mockReturnValue({ + getDirectories: vi.fn().mockReturnValue(['/test/dir']), + }), getGeminiClient: vi.fn(), + setFallbackMode: vi.fn(), }; const MockedConfig = vi.mocked(Config, true); MockedConfig.mockImplementation( @@ -212,7 +217,9 @@ describe('Gemini Client (client.ts)', () => { // We can instantiate the client here since Config is mocked // and the constructor will use the mocked GoogleGenAI - client = new GeminiClient(new Config({} as never)); + client = new GeminiClient( + new Config({ sessionId: 'test-session-id' } as never), + ); mockConfigObject.getGeminiClient.mockReturnValue(client); await client.initialize(contentGeneratorConfig); @@ -351,16 +358,19 @@ describe('Gemini Client 
(client.ts)', () => { await client.generateContent(contents, generationConfig, abortSignal); - expect(mockGenerateContentFn).toHaveBeenCalledWith({ - model: 'test-model', - config: { - abortSignal, - systemInstruction: getCoreSystemPrompt(''), - temperature: 0.5, - topP: 1, + expect(mockGenerateContentFn).toHaveBeenCalledWith( + { + model: 'test-model', + config: { + abortSignal, + systemInstruction: getCoreSystemPrompt(''), + temperature: 0.5, + topP: 1, + }, + contents, }, - contents, - }); + 'test-session-id', + ); }); }); @@ -379,18 +389,21 @@ describe('Gemini Client (client.ts)', () => { await client.generateJson(contents, schema, abortSignal); - expect(mockGenerateContentFn).toHaveBeenCalledWith({ - model: 'test-model', // Should use current model from config - config: { - abortSignal, - systemInstruction: getCoreSystemPrompt(''), - temperature: 0, - topP: 1, - responseSchema: schema, - responseMimeType: 'application/json', + expect(mockGenerateContentFn).toHaveBeenCalledWith( + { + model: 'test-model', // Should use current model from config + config: { + abortSignal, + systemInstruction: getCoreSystemPrompt(''), + temperature: 0, + topP: 1, + responseSchema: schema, + responseMimeType: 'application/json', + }, + contents, }, - contents, - }); + 'test-session-id', + ); }); it('should allow overriding model and config', async () => { @@ -414,19 +427,22 @@ describe('Gemini Client (client.ts)', () => { customConfig, ); - expect(mockGenerateContentFn).toHaveBeenCalledWith({ - model: customModel, - config: { - abortSignal, - systemInstruction: getCoreSystemPrompt(''), - temperature: 0.9, - topP: 1, // from default - topK: 20, - responseSchema: schema, - responseMimeType: 'application/json', + expect(mockGenerateContentFn).toHaveBeenCalledWith( + { + model: customModel, + config: { + abortSignal, + systemInstruction: getCoreSystemPrompt(''), + temperature: 0.9, + topP: 1, // from default + topK: 20, + responseSchema: schema, + responseMimeType: 'application/json', 
+ }, + contents, }, - contents, - }); + 'test-session-id', + ); }); }); @@ -648,19 +664,31 @@ describe('Gemini Client (client.ts)', () => { }); describe('sendMessageStream', () => { - it('should include IDE context when ideMode is enabled', async () => { + it('should include IDE context when ideModeFeature is enabled', async () => { // Arrange - vi.mocked(ideContext.getOpenFilesContext).mockReturnValue({ - activeFile: '/path/to/active/file.ts', - selectedText: 'hello', - cursor: { line: 5, character: 10 }, - recentOpenFiles: [ - { filePath: '/path/to/recent/file1.ts', timestamp: Date.now() }, - { filePath: '/path/to/recent/file2.ts', timestamp: Date.now() }, - ], + vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [ + { + path: '/path/to/active/file.ts', + timestamp: Date.now(), + isActive: true, + selectedText: 'hello', + cursor: { line: 5, character: 10 }, + }, + { + path: '/path/to/recent/file1.ts', + timestamp: Date.now(), + }, + { + path: '/path/to/recent/file2.ts', + timestamp: Date.now(), + }, + ], + }, }); - vi.spyOn(client['config'], 'getIdeMode').mockReturnValue(true); + vi.spyOn(client['config'], 'getIdeModeFeature').mockReturnValue(true); const mockStream = (async function* () { yield { type: 'content', value: 'Hello' }; @@ -692,15 +720,188 @@ describe('Gemini Client (client.ts)', () => { } // Assert - expect(ideContext.getOpenFilesContext).toHaveBeenCalled(); + expect(ideContext.getIdeContext).toHaveBeenCalled(); const expectedContext = ` -This is the file that the user was most recently looking at: +This is the file that the user is looking at: - Path: /path/to/active/file.ts This is the cursor position in the file: - Cursor Position: Line 5, Character 10 -This is the selected text in the active file: +This is the selected text in the file: - hello -Here are files the user has recently opened, with the most recent at the top: +Here are some other files the user has open, with the most recent at the top: +- 
/path/to/recent/file1.ts +- /path/to/recent/file2.ts + `.trim(); + const expectedRequest = [{ text: expectedContext }, ...initialRequest]; + expect(mockTurnRunFn).toHaveBeenCalledWith( + expectedRequest, + expect.any(Object), + ); + }); + + it('should not add context if ideModeFeature is enabled but no open files', async () => { + // Arrange + vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [], + }, + }); + + vi.spyOn(client['config'], 'getIdeModeFeature').mockReturnValue(true); + + const mockStream = (async function* () { + yield { type: 'content', value: 'Hello' }; + })(); + mockTurnRunFn.mockReturnValue(mockStream); + + const mockChat: Partial = { + addHistory: vi.fn(), + getHistory: vi.fn().mockReturnValue([]), + }; + client['chat'] = mockChat as GeminiChat; + + const mockGenerator: Partial = { + countTokens: vi.fn().mockResolvedValue({ totalTokens: 0 }), + generateContent: mockGenerateContentFn, + }; + client['contentGenerator'] = mockGenerator as ContentGenerator; + + const initialRequest = [{ text: 'Hi' }]; + + // Act + const stream = client.sendMessageStream( + initialRequest, + new AbortController().signal, + 'prompt-id-ide', + ); + for await (const _ of stream) { + // consume stream + } + + // Assert + expect(ideContext.getIdeContext).toHaveBeenCalled(); + expect(mockTurnRunFn).toHaveBeenCalledWith( + initialRequest, + expect.any(Object), + ); + }); + + it('should add context if ideModeFeature is enabled and there is one active file', async () => { + // Arrange + vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [ + { + path: '/path/to/active/file.ts', + timestamp: Date.now(), + isActive: true, + selectedText: 'hello', + cursor: { line: 5, character: 10 }, + }, + ], + }, + }); + + vi.spyOn(client['config'], 'getIdeModeFeature').mockReturnValue(true); + + const mockStream = (async function* () { + yield { type: 'content', value: 'Hello' }; + })(); + 
mockTurnRunFn.mockReturnValue(mockStream); + + const mockChat: Partial = { + addHistory: vi.fn(), + getHistory: vi.fn().mockReturnValue([]), + }; + client['chat'] = mockChat as GeminiChat; + + const mockGenerator: Partial = { + countTokens: vi.fn().mockResolvedValue({ totalTokens: 0 }), + generateContent: mockGenerateContentFn, + }; + client['contentGenerator'] = mockGenerator as ContentGenerator; + + const initialRequest = [{ text: 'Hi' }]; + + // Act + const stream = client.sendMessageStream( + initialRequest, + new AbortController().signal, + 'prompt-id-ide', + ); + for await (const _ of stream) { + // consume stream + } + + // Assert + expect(ideContext.getIdeContext).toHaveBeenCalled(); + const expectedContext = ` +This is the file that the user is looking at: +- Path: /path/to/active/file.ts +This is the cursor position in the file: +- Cursor Position: Line 5, Character 10 +This is the selected text in the file: +- hello + `.trim(); + const expectedRequest = [{ text: expectedContext }, ...initialRequest]; + expect(mockTurnRunFn).toHaveBeenCalledWith( + expectedRequest, + expect.any(Object), + ); + }); + + it('should add context if ideModeFeature is enabled and there are open files but no active file', async () => { + // Arrange + vi.mocked(ideContext.getIdeContext).mockReturnValue({ + workspaceState: { + openFiles: [ + { + path: '/path/to/recent/file1.ts', + timestamp: Date.now(), + }, + { + path: '/path/to/recent/file2.ts', + timestamp: Date.now(), + }, + ], + }, + }); + + vi.spyOn(client['config'], 'getIdeModeFeature').mockReturnValue(true); + + const mockStream = (async function* () { + yield { type: 'content', value: 'Hello' }; + })(); + mockTurnRunFn.mockReturnValue(mockStream); + + const mockChat: Partial = { + addHistory: vi.fn(), + getHistory: vi.fn().mockReturnValue([]), + }; + client['chat'] = mockChat as GeminiChat; + + const mockGenerator: Partial = { + countTokens: vi.fn().mockResolvedValue({ totalTokens: 0 }), + generateContent: 
mockGenerateContentFn, + }; + client['contentGenerator'] = mockGenerator as ContentGenerator; + + const initialRequest = [{ text: 'Hi' }]; + + // Act + const stream = client.sendMessageStream( + initialRequest, + new AbortController().signal, + 'prompt-id-ide', + ); + for await (const _ of stream) { + // consume stream + } + + // Assert + expect(ideContext.getIdeContext).toHaveBeenCalled(); + const expectedContext = ` +Here are some files the user has open, with the most recent at the top: - /path/to/recent/file1.ts - /path/to/recent/file2.ts `.trim(); @@ -1009,11 +1210,14 @@ Here are files the user has recently opened, with the most recent at the top: config: expect.any(Object), contents, }); - expect(mockGenerateContentFn).toHaveBeenCalledWith({ - model: currentModel, - config: expect.any(Object), - contents, - }); + expect(mockGenerateContentFn).toHaveBeenCalledWith( + { + model: currentModel, + config: expect.any(Object), + contents, + }, + 'test-session-id', + ); }); }); @@ -1080,7 +1284,8 @@ Here are files the user has recently opened, with the most recent at the top: // mock config been changed const currentModel = initialModel + '-changed'; - vi.spyOn(client['config'], 'getModel').mockReturnValueOnce(currentModel); + const getModelSpy = vi.spyOn(client['config'], 'getModel'); + getModelSpy.mockReturnValue(currentModel); const mockFallbackHandler = vi.fn().mockResolvedValue(true); client['config'].flashFallbackHandler = mockFallbackHandler; diff --git a/packages/core/src/core/client.ts b/packages/core/src/core/client.ts index f762f7f9..e70093d8 100644 --- a/packages/core/src/core/client.ts +++ b/packages/core/src/core/client.ts @@ -43,8 +43,12 @@ import { ProxyAgent, setGlobalDispatcher } from 'undici'; import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js'; import { LoopDetectionService } from '../services/loopDetectionService.js'; import { ideContext } from '../ide/ideContext.js'; -import { logFlashDecidedToContinue } from 
'../telemetry/loggers.js'; -import { FlashDecidedToContinueEvent } from '../telemetry/types.js'; +import { logNextSpeakerCheck } from '../telemetry/loggers.js'; +import { + MalformedJsonResponseEvent, + NextSpeakerCheckEvent, +} from '../telemetry/types.js'; +import { ClearcutLogger } from '../telemetry/clearcut-logger/clearcut-logger.js'; function isThinkingSupported(model: string) { if (model.startsWith('gemini-2.5')) return true; @@ -106,7 +110,7 @@ export class GeminiClient { private readonly COMPRESSION_PRESERVE_THRESHOLD = 0.3; private readonly loopDetector: LoopDetectionService; - private lastPromptId?: string; + private lastPromptId: string; constructor(private config: Config) { if (config.getProxy()) { @@ -115,6 +119,7 @@ export class GeminiClient { this.embeddingModel = config.getEmbeddingModel(); this.loopDetector = new LoopDetectionService(config); + this.lastPromptId = this.config.getSessionId(); } async initialize(contentGeneratorConfig: ContentGeneratorConfig) { @@ -171,8 +176,36 @@ export class GeminiClient { this.chat = await this.startChat(); } + async addDirectoryContext(): Promise { + if (!this.chat) { + return; + } + + this.getChat().addHistory({ + role: 'user', + parts: [{ text: await this.getDirectoryContext() }], + }); + } + + private async getDirectoryContext(): Promise { + const workspaceContext = this.config.getWorkspaceContext(); + const workspaceDirectories = workspaceContext.getDirectories(); + + const folderStructures = await Promise.all( + workspaceDirectories.map((dir) => + getFolderStructure(dir, { + fileService: this.config.getFileService(), + }), + ), + ); + + const folderStructure = folderStructures.join('\n'); + const dirList = workspaceDirectories.map((dir) => ` - ${dir}`).join('\n'); + const workingDirPreamble = `I'm currently working in the following directories:\n${dirList}\n Folder structures are as follows:\n${folderStructure}`; + return workingDirPreamble; + } + private async getEnvironment(): Promise { - const cwd = 
this.config.getWorkingDir(); const today = new Date().toLocaleDateString(undefined, { weekday: 'long', year: 'numeric', @@ -180,15 +213,36 @@ export class GeminiClient { day: 'numeric', }); const platform = process.platform; - const folderStructure = await getFolderStructure(cwd, { - fileService: this.config.getFileService(), - maxItems: this.config.getMaxFolderItems(), - }); + + const workspaceContext = this.config.getWorkspaceContext(); + const workspaceDirectories = workspaceContext.getDirectories(); + + const folderStructures = await Promise.all( + workspaceDirectories.map((dir) => + getFolderStructure(dir, { + fileService: this.config.getFileService(), + }), + ), + ); + + const folderStructure = folderStructures.join('\n'); + + let workingDirPreamble: string; + if (workspaceDirectories.length === 1) { + workingDirPreamble = `I'm currently working in the directory: ${workspaceDirectories[0]}`; + } else { + const dirList = workspaceDirectories + .map((dir) => ` - ${dir}`) + .join('\n'); + workingDirPreamble = `I'm currently working in the following directories:\n${dirList}`; + } + const context = ` This is the Qwen Code. We are setting up the context for our chat. Today's date is ${today}. My operating system is: ${platform} - I'm currently working in the directory: ${cwd} + ${workingDirPreamble} + Here is the folder structure of the current working directories:\n ${folderStructure} `.trim(); @@ -363,33 +417,41 @@ export class GeminiClient { } } - if (this.config.getIdeMode()) { - const openFiles = ideContext.getOpenFilesContext(); - if (openFiles) { + if (this.config.getIdeModeFeature() && this.config.getIdeMode()) { + const ideContextState = ideContext.getIdeContext(); + const openFiles = ideContextState?.workspaceState?.openFiles; + + if (openFiles && openFiles.length > 0) { const contextParts: string[] = []; - if (openFiles.activeFile) { + const firstFile = openFiles[0]; + const activeFile = firstFile.isActive ? 
firstFile : undefined; + + if (activeFile) { contextParts.push( - `This is the file that the user was most recently looking at:\n- Path: ${openFiles.activeFile}`, + `This is the file that the user is looking at:\n- Path: ${activeFile.path}`, ); - if (openFiles.cursor) { + if (activeFile.cursor) { contextParts.push( - `This is the cursor position in the file:\n- Cursor Position: Line ${openFiles.cursor.line}, Character ${openFiles.cursor.character}`, + `This is the cursor position in the file:\n- Cursor Position: Line ${activeFile.cursor.line}, Character ${activeFile.cursor.character}`, ); } - if (openFiles.selectedText) { + if (activeFile.selectedText) { contextParts.push( - `This is the selected text in the active file:\n- ${openFiles.selectedText}`, + `This is the selected text in the file:\n- ${activeFile.selectedText}`, ); } } - if (openFiles.recentOpenFiles && openFiles.recentOpenFiles.length > 0) { - const recentFiles = openFiles.recentOpenFiles - .map((file) => `- ${file.filePath}`) + const otherOpenFiles = activeFile ? openFiles.slice(1) : openFiles; + + if (otherOpenFiles.length > 0) { + const recentFiles = otherOpenFiles + .map((file) => `- ${file.path}`) .join('\n'); - contextParts.push( - `Here are files the user has recently opened, with the most recent at the top:\n${recentFiles}`, - ); + const heading = activeFile + ? 
`Here are some other files the user has open, with the most recent at the top:` + : `Here are some files the user has open, with the most recent at the top:`; + contextParts.push(`${heading}\n${recentFiles}`); } if (contextParts.length > 0) { @@ -431,11 +493,15 @@ export class GeminiClient { this, signal, ); + logNextSpeakerCheck( + this.config, + new NextSpeakerCheckEvent( + prompt_id, + turn.finishReason?.toString() || '', + nextSpeakerCheck?.next_speaker || '', + ), + ); if (nextSpeakerCheck?.next_speaker === 'model') { - logFlashDecidedToContinue( - this.config, - new FlashDecidedToContinueEvent(prompt_id), - ); const nextRequest = [{ text: 'Please continue.' }]; // This recursive call's events will be yielded out, but the final // turn object will be from the top-level call. @@ -474,16 +540,19 @@ export class GeminiClient { }; const apiCall = () => - this.getContentGenerator().generateContent({ - model: modelToUse, - config: { - ...requestConfig, - systemInstruction, - responseSchema: schema, - responseMimeType: 'application/json', + this.getContentGenerator().generateContent( + { + model: modelToUse, + config: { + ...requestConfig, + systemInstruction, + responseSchema: schema, + responseMimeType: 'application/json', + }, + contents, }, - contents, - }); + this.lastPromptId, + ); const result = await retryWithBackoff(apiCall, { onPersistent429: async (authType?: string, error?: unknown) => @@ -491,7 +560,7 @@ export class GeminiClient { authType: this.config.getContentGeneratorConfig()?.authType, }); - const text = getResponseText(result); + let text = getResponseText(result); if (!text) { const error = new Error( 'API returned an empty response for generateJson.', @@ -504,6 +573,18 @@ export class GeminiClient { ); throw error; } + + const prefix = '```json'; + const suffix = '```'; + if (text.startsWith(prefix) && text.endsWith(suffix)) { + ClearcutLogger.getInstance(this.config)?.logMalformedJsonResponseEvent( + new MalformedJsonResponseEvent(modelToUse), 
+ ); + text = text + .substring(prefix.length, text.length - suffix.length) + .trim(); + } + try { // Try to extract JSON from various formats const extractors = [ @@ -540,7 +621,9 @@ export class GeminiClient { 'generateJson-parse', ); throw new Error( - `Failed to parse API response as JSON: ${getErrorMessage(parseError)}`, + `Failed to parse API response as JSON: ${getErrorMessage( + parseError, + )}`, ); } } catch (error) { @@ -594,11 +677,14 @@ export class GeminiClient { }; const apiCall = () => - this.getContentGenerator().generateContent({ - model: modelToUse, - config: requestConfig, - contents, - }); + this.getContentGenerator().generateContent( + { + model: modelToUse, + config: requestConfig, + contents, + }, + this.lastPromptId, + ); const result = await retryWithBackoff(apiCall, { onPersistent429: async (authType?: string, error?: unknown) => @@ -782,6 +868,7 @@ export class GeminiClient { ); if (accepted !== false && accepted !== null) { this.config.setModel(fallbackModel); + this.config.setFallbackMode(true); return fallbackModel; } // Check if the model was switched manually in the handler diff --git a/packages/core/src/core/contentGenerator.ts b/packages/core/src/core/contentGenerator.ts index 3f6e74b0..7dc16564 100644 --- a/packages/core/src/core/contentGenerator.ts +++ b/packages/core/src/core/contentGenerator.ts @@ -25,10 +25,12 @@ import { UserTierId } from '../code_assist/types.js'; export interface ContentGenerator { generateContent( request: GenerateContentParameters, + userPromptId: string, ): Promise; generateContentStream( request: GenerateContentParameters, + userPromptId: string, ): Promise>; countTokens(request: CountTokensParameters): Promise; diff --git a/packages/core/src/core/coreToolScheduler.test.ts b/packages/core/src/core/coreToolScheduler.test.ts index 7b6a130c..80651a14 100644 --- a/packages/core/src/core/coreToolScheduler.test.ts +++ b/packages/core/src/core/coreToolScheduler.test.ts @@ -20,6 +20,7 @@ import { ToolResult, 
Config, Icon, + ApprovalMode, } from '../index.js'; import { Part, PartListUnion } from '@google/genai'; @@ -126,6 +127,7 @@ describe('CoreToolScheduler', () => { getSessionId: () => 'test-session-id', getUsageStatisticsEnabled: () => true, getDebugMode: () => false, + getApprovalMode: () => ApprovalMode.DEFAULT, } as unknown as Config; const scheduler = new CoreToolScheduler({ @@ -194,6 +196,7 @@ describe('CoreToolScheduler with payload', () => { getSessionId: () => 'test-session-id', getUsageStatisticsEnabled: () => true, getDebugMode: () => false, + getApprovalMode: () => ApprovalMode.DEFAULT, } as unknown as Config; const scheduler = new CoreToolScheduler({ @@ -470,6 +473,7 @@ describe('CoreToolScheduler edit cancellation', () => { getSessionId: () => 'test-session-id', getUsageStatisticsEnabled: () => true, getDebugMode: () => false, + getApprovalMode: () => ApprovalMode.DEFAULT, } as unknown as Config; const scheduler = new CoreToolScheduler({ @@ -527,3 +531,85 @@ describe('CoreToolScheduler edit cancellation', () => { expect(cancelledCall.response.resultDisplay.fileName).toBe('test.txt'); }); }); + +describe('CoreToolScheduler YOLO mode', () => { + it('should execute tool requiring confirmation directly without waiting', async () => { + // Arrange + const mockTool = new MockTool(); + // This tool would normally require confirmation. + mockTool.shouldConfirm = true; + + const toolRegistry = { + getTool: () => mockTool, + getToolByName: () => mockTool, + // Other properties are not needed for this test but are included for type consistency. + getFunctionDeclarations: () => [], + tools: new Map(), + discovery: {} as any, + registerTool: () => {}, + getToolByDisplayName: () => mockTool, + getTools: () => [], + discoverTools: async () => {}, + getAllTools: () => [], + getToolsByServer: () => [], + }; + + const onAllToolCallsComplete = vi.fn(); + const onToolCallsUpdate = vi.fn(); + + // Configure the scheduler for YOLO mode. 
+ const mockConfig = { + getSessionId: () => 'test-session-id', + getUsageStatisticsEnabled: () => true, + getDebugMode: () => false, + getApprovalMode: () => ApprovalMode.YOLO, + } as unknown as Config; + + const scheduler = new CoreToolScheduler({ + config: mockConfig, + toolRegistry: Promise.resolve(toolRegistry as any), + onAllToolCallsComplete, + onToolCallsUpdate, + getPreferredEditor: () => 'vscode', + }); + + const abortController = new AbortController(); + const request = { + callId: '1', + name: 'mockTool', + args: { param: 'value' }, + isClientInitiated: false, + prompt_id: 'prompt-id-yolo', + }; + + // Act + await scheduler.schedule([request], abortController.signal); + + // Assert + // 1. The tool's execute method was called directly. + expect(mockTool.executeFn).toHaveBeenCalledWith({ param: 'value' }); + + // 2. The tool call status never entered 'awaiting_approval'. + const statusUpdates = onToolCallsUpdate.mock.calls + .map((call) => (call[0][0] as ToolCall)?.status) + .filter(Boolean); + expect(statusUpdates).not.toContain('awaiting_approval'); + expect(statusUpdates).toEqual([ + 'validating', + 'scheduled', + 'executing', + 'success', + ]); + + // 3. The final callback indicates the tool call was successful. 
+ expect(onAllToolCallsComplete).toHaveBeenCalled(); + const completedCalls = onAllToolCallsComplete.mock + .calls[0][0] as ToolCall[]; + expect(completedCalls).toHaveLength(1); + const completedCall = completedCalls[0]; + expect(completedCall.status).toBe('success'); + if (completedCall.status === 'success') { + expect(completedCall.response.resultDisplay).toBe('Tool executed'); + } + }); +}); diff --git a/packages/core/src/core/coreToolScheduler.ts b/packages/core/src/core/coreToolScheduler.ts index 0d7d5923..b4c10a64 100644 --- a/packages/core/src/core/coreToolScheduler.ts +++ b/packages/core/src/core/coreToolScheduler.ts @@ -19,6 +19,7 @@ import { logToolCall, ToolCallEvent, ToolConfirmationPayload, + ToolErrorType, } from '../index.js'; import { Part, PartListUnion } from '@google/genai'; import { getResponseTextFromParts } from '../utils/generateContentResponseUtilities.js'; @@ -201,6 +202,7 @@ export function convertToFunctionResponse( const createErrorResponse = ( request: ToolCallRequestInfo, error: Error, + errorType: ToolErrorType | undefined, ): ToolCallResponseInfo => ({ callId: request.callId, error, @@ -212,6 +214,7 @@ const createErrorResponse = ( }, }, resultDisplay: error.message, + errorType, }); interface CoreToolSchedulerOptions { @@ -219,7 +222,6 @@ interface CoreToolSchedulerOptions { outputUpdateHandler?: OutputUpdateHandler; onAllToolCallsComplete?: AllToolCallsCompleteHandler; onToolCallsUpdate?: ToolCallsUpdateHandler; - approvalMode?: ApprovalMode; getPreferredEditor: () => EditorType | undefined; config: Config; } @@ -230,7 +232,6 @@ export class CoreToolScheduler { private outputUpdateHandler?: OutputUpdateHandler; private onAllToolCallsComplete?: AllToolCallsCompleteHandler; private onToolCallsUpdate?: ToolCallsUpdateHandler; - private approvalMode: ApprovalMode; private getPreferredEditor: () => EditorType | undefined; private config: Config; @@ -240,7 +241,6 @@ export class CoreToolScheduler { this.outputUpdateHandler = 
options.outputUpdateHandler; this.onAllToolCallsComplete = options.onAllToolCallsComplete; this.onToolCallsUpdate = options.onToolCallsUpdate; - this.approvalMode = options.approvalMode ?? ApprovalMode.DEFAULT; this.getPreferredEditor = options.getPreferredEditor; } @@ -369,6 +369,7 @@ export class CoreToolScheduler { }, resultDisplay, error: undefined, + errorType: undefined, }, durationMs, outcome, @@ -439,6 +440,7 @@ export class CoreToolScheduler { response: createErrorResponse( reqInfo, new Error(`Tool "${reqInfo.name}" not found in registry.`), + ToolErrorType.TOOL_NOT_REGISTERED, ), durationMs: 0, }; @@ -462,7 +464,7 @@ export class CoreToolScheduler { const { request: reqInfo, tool: toolInstance } = toolCall; try { - if (this.approvalMode === ApprovalMode.YOLO) { + if (this.config.getApprovalMode() === ApprovalMode.YOLO) { this.setStatusInternal(reqInfo.callId, 'scheduled'); } else { const confirmationDetails = await toolInstance.shouldConfirmExecute( @@ -502,6 +504,7 @@ export class CoreToolScheduler { createErrorResponse( reqInfo, error instanceof Error ? 
error : new Error(String(error)), + ToolErrorType.UNHANDLED_EXCEPTION, ), ); } @@ -673,19 +676,30 @@ export class CoreToolScheduler { return; } - const response = convertToFunctionResponse( - toolName, - callId, - toolResult.llmContent, - ); - const successResponse: ToolCallResponseInfo = { - callId, - responseParts: response, - resultDisplay: toolResult.returnDisplay, - error: undefined, - }; - - this.setStatusInternal(callId, 'success', successResponse); + if (toolResult.error === undefined) { + const response = convertToFunctionResponse( + toolName, + callId, + toolResult.llmContent, + ); + const successResponse: ToolCallResponseInfo = { + callId, + responseParts: response, + resultDisplay: toolResult.returnDisplay, + error: undefined, + errorType: undefined, + }; + this.setStatusInternal(callId, 'success', successResponse); + } else { + // It is a failure + const error = new Error(toolResult.error.message); + const errorResponse = createErrorResponse( + scheduledCall.request, + error, + toolResult.error.type, + ); + this.setStatusInternal(callId, 'error', errorResponse); + } }) .catch((executionError: Error) => { this.setStatusInternal( @@ -696,6 +710,7 @@ export class CoreToolScheduler { executionError instanceof Error ? 
executionError : new Error(String(executionError)), + ToolErrorType.UNHANDLED_EXCEPTION, ), ); }); diff --git a/packages/core/src/core/geminiChat.test.ts b/packages/core/src/core/geminiChat.test.ts index 39dd883e..cd5e3841 100644 --- a/packages/core/src/core/geminiChat.test.ts +++ b/packages/core/src/core/geminiChat.test.ts @@ -79,11 +79,14 @@ describe('GeminiChat', () => { await chat.sendMessage({ message: 'hello' }, 'prompt-id-1'); - expect(mockModelsModule.generateContent).toHaveBeenCalledWith({ - model: 'gemini-pro', - contents: [{ role: 'user', parts: [{ text: 'hello' }] }], - config: {}, - }); + expect(mockModelsModule.generateContent).toHaveBeenCalledWith( + { + model: 'gemini-pro', + contents: [{ role: 'user', parts: [{ text: 'hello' }] }], + config: {}, + }, + 'prompt-id-1', + ); }); }); @@ -111,11 +114,14 @@ describe('GeminiChat', () => { await chat.sendMessageStream({ message: 'hello' }, 'prompt-id-1'); - expect(mockModelsModule.generateContentStream).toHaveBeenCalledWith({ - model: 'gemini-pro', - contents: [{ role: 'user', parts: [{ text: 'hello' }] }], - config: {}, - }); + expect(mockModelsModule.generateContentStream).toHaveBeenCalledWith( + { + model: 'gemini-pro', + contents: [{ role: 'user', parts: [{ text: 'hello' }] }], + config: {}, + }, + 'prompt-id-1', + ); }); }); diff --git a/packages/core/src/core/geminiChat.ts b/packages/core/src/core/geminiChat.ts index 4c3cd4c8..bd81400f 100644 --- a/packages/core/src/core/geminiChat.ts +++ b/packages/core/src/core/geminiChat.ts @@ -225,6 +225,7 @@ export class GeminiChat { ); if (accepted !== false && accepted !== null) { this.config.setModel(fallbackModel); + this.config.setFallbackMode(true); return fallbackModel; } // Check if the model was switched manually in the handler @@ -286,11 +287,14 @@ export class GeminiChat { ); } - return this.contentGenerator.generateContent({ - model: modelToUse, - contents: requestContents, - config: { ...this.generationConfig, ...params.config }, - }); + return 
this.contentGenerator.generateContent( + { + model: modelToUse, + contents: requestContents, + config: { ...this.generationConfig, ...params.config }, + }, + prompt_id, + ); }; response = await retryWithBackoff(apiCall, { @@ -393,11 +397,14 @@ export class GeminiChat { ); } - return this.contentGenerator.generateContentStream({ - model: modelToUse, - contents: requestContents, - config: { ...this.generationConfig, ...params.config }, - }); + return this.contentGenerator.generateContentStream( + { + model: modelToUse, + contents: requestContents, + config: { ...this.generationConfig, ...params.config }, + }, + prompt_id, + ); }; // Note: Retrying streams can be complex. If generateContentStream itself doesn't handle retries diff --git a/packages/core/src/core/logger.test.ts b/packages/core/src/core/logger.test.ts index c64f4b6d..0633d11c 100644 --- a/packages/core/src/core/logger.test.ts +++ b/packages/core/src/core/logger.test.ts @@ -393,12 +393,16 @@ describe('Logger', () => { { role: 'model', parts: [{ text: 'Hi there' }] }, ]; - it('should save a checkpoint to a tagged file when a tag is provided', async () => { - const tag = 'my-test-tag'; + it.each([ + { tag: 'test-tag', sanitizedTag: 'test-tag' }, + { tag: 'invalid/?*!', sanitizedTag: 'invalid' }, + { tag: '/?*!', sanitizedTag: 'default' }, + { tag: '../../secret', sanitizedTag: 'secret' }, + ])('should save a checkpoint', async ({ tag, sanitizedTag }) => { await logger.saveCheckpoint(conversation, tag); const taggedFilePath = path.join( TEST_GEMINI_DIR, - `${CHECKPOINT_FILE_NAME.replace('.json', '')}-${tag}.json`, + `checkpoint-${sanitizedTag}.json`, ); const fileContent = await fs.readFile(taggedFilePath, 'utf-8'); expect(JSON.parse(fileContent)).toEqual(conversation); @@ -433,15 +437,19 @@ describe('Logger', () => { ); }); - it('should load from a tagged checkpoint file when a tag is provided', async () => { - const tag = 'my-load-tag'; + it.each([ + { tag: 'load-tag', sanitizedTag: 'load-tag' }, + { tag: 
'inv/load?*!', sanitizedTag: 'invload' }, + { tag: '/?*!', sanitizedTag: 'default' }, + { tag: '../../secret', sanitizedTag: 'secret' }, + ])('should load from a checkpoint', async ({ tag, sanitizedTag }) => { const taggedConversation = [ ...conversation, - { role: 'user', parts: [{ text: 'Another message' }] }, + { role: 'user', parts: [{ text: 'hello' }] }, ]; const taggedFilePath = path.join( TEST_GEMINI_DIR, - `${CHECKPOINT_FILE_NAME.replace('.json', '')}-${tag}.json`, + `checkpoint-${sanitizedTag}.json`, ); await fs.writeFile( taggedFilePath, @@ -464,11 +472,16 @@ describe('Logger', () => { }); it('should return an empty array if the file contains invalid JSON', async () => { - await fs.writeFile(TEST_CHECKPOINT_FILE_PATH, 'invalid json'); + const tag = 'invalid-json-tag'; + const taggedFilePath = path.join( + TEST_GEMINI_DIR, + `checkpoint-${tag}.json`, + ); + await fs.writeFile(taggedFilePath, 'invalid json'); const consoleErrorSpy = vi .spyOn(console, 'error') .mockImplementation(() => {}); - const loadedCheckpoint = await logger.loadCheckpoint('missing'); + const loadedCheckpoint = await logger.loadCheckpoint(tag); expect(loadedCheckpoint).toEqual([]); expect(consoleErrorSpy).toHaveBeenCalledWith( expect.stringContaining('Failed to read or parse checkpoint file'), @@ -490,6 +503,68 @@ describe('Logger', () => { }); }); + describe('deleteCheckpoint', () => { + const conversation: Content[] = [ + { role: 'user', parts: [{ text: 'Content to be deleted' }] }, + ]; + const tag = 'delete-me'; + let taggedFilePath: string; + + beforeEach(async () => { + taggedFilePath = path.join( + TEST_GEMINI_DIR, + `${CHECKPOINT_FILE_NAME.replace('.json', '')}-${tag}.json`, + ); + // Create a file to be deleted + await fs.writeFile(taggedFilePath, JSON.stringify(conversation)); + }); + + it('should delete the specified checkpoint file and return true', async () => { + const result = await logger.deleteCheckpoint(tag); + expect(result).toBe(true); + + // Verify the file is 
actually gone + await expect(fs.access(taggedFilePath)).rejects.toThrow(/ENOENT/); + }); + + it('should return false if the checkpoint file does not exist', async () => { + const result = await logger.deleteCheckpoint('non-existent-tag'); + expect(result).toBe(false); + }); + + it('should re-throw an error if file deletion fails for reasons other than not existing', async () => { + // Simulate a different error (e.g., permission denied) + vi.spyOn(fs, 'unlink').mockRejectedValueOnce( + new Error('EACCES: permission denied'), + ); + const consoleErrorSpy = vi + .spyOn(console, 'error') + .mockImplementation(() => {}); + + await expect(logger.deleteCheckpoint(tag)).rejects.toThrow( + 'EACCES: permission denied', + ); + expect(consoleErrorSpy).toHaveBeenCalledWith( + `Failed to delete checkpoint file ${taggedFilePath}:`, + expect.any(Error), + ); + }); + + it('should return false if logger is not initialized', async () => { + const uninitializedLogger = new Logger(testSessionId); + uninitializedLogger.close(); + const consoleErrorSpy = vi + .spyOn(console, 'error') + .mockImplementation(() => {}); + + const result = await uninitializedLogger.deleteCheckpoint(tag); + expect(result).toBe(false); + expect(consoleErrorSpy).toHaveBeenCalledWith( + 'Logger not initialized or checkpoint file path not set. 
Cannot delete checkpoint.', + ); + }); + }); + describe('close', () => { it('should reset logger state', async () => { await logger.logMessage(MessageSenderType.USER, 'A message'); diff --git a/packages/core/src/core/logger.ts b/packages/core/src/core/logger.ts index c9124ac1..475c579d 100644 --- a/packages/core/src/core/logger.ts +++ b/packages/core/src/core/logger.ts @@ -239,12 +239,11 @@ export class Logger { throw new Error('Checkpoint file path not set.'); } // Sanitize tag to prevent directory traversal attacks - tag = tag.replace(/[^a-zA-Z0-9-_]/g, ''); - if (!tag) { - console.error('Sanitized tag is empty setting to "default".'); - tag = 'default'; + let sanitizedTag = tag.replace(/[^a-zA-Z0-9-_]/g, ''); + if (!sanitizedTag) { + sanitizedTag = 'default'; } - return path.join(this.qwenDir, `checkpoint-${tag}.json`); + return path.join(this.qwenDir, `checkpoint-${sanitizedTag}.json`); } async saveCheckpoint(conversation: Content[], tag: string): Promise { @@ -283,12 +282,31 @@ export class Logger { return parsedContent as Content[]; } catch (error) { console.error(`Failed to read or parse checkpoint file ${path}:`, error); + return []; + } + } + + async deleteCheckpoint(tag: string): Promise { + if (!this.initialized || !this.qwenDir) { + console.error( + 'Logger not initialized or checkpoint file path not set. Cannot delete checkpoint.', + ); + return false; + } + + const path = this._checkpointPath(tag); + + try { + await fs.unlink(path); + return true; + } catch (error) { const nodeError = error as NodeJS.ErrnoException; if (nodeError.code === 'ENOENT') { - // File doesn't exist, which is fine. Return empty array. - return []; + // File doesn't exist, which is fine. 
+ return false; } - return []; + console.error(`Failed to delete checkpoint file ${path}:`, error); + throw error; } } diff --git a/packages/core/src/core/nonInteractiveToolExecutor.ts b/packages/core/src/core/nonInteractiveToolExecutor.ts index ab001bd6..52704bf1 100644 --- a/packages/core/src/core/nonInteractiveToolExecutor.ts +++ b/packages/core/src/core/nonInteractiveToolExecutor.ts @@ -8,6 +8,7 @@ import { logToolCall, ToolCallRequestInfo, ToolCallResponseInfo, + ToolErrorType, ToolRegistry, ToolResult, } from '../index.js'; @@ -56,6 +57,7 @@ export async function executeToolCall( ], resultDisplay: error.message, error, + errorType: ToolErrorType.TOOL_NOT_REGISTERED, }; } @@ -79,7 +81,11 @@ export async function executeToolCall( function_name: toolCallRequest.name, function_args: toolCallRequest.args, duration_ms: durationMs, - success: true, + success: toolResult.error === undefined, + error: + toolResult.error === undefined ? undefined : toolResult.error.message, + error_type: + toolResult.error === undefined ? undefined : toolResult.error.type, prompt_id: toolCallRequest.prompt_id, }); @@ -93,7 +99,12 @@ export async function executeToolCall( callId: toolCallRequest.callId, responseParts: response, resultDisplay: tool_display, - error: undefined, + error: + toolResult.error === undefined + ? undefined + : new Error(toolResult.error.message), + errorType: + toolResult.error === undefined ? undefined : toolResult.error.type, }; } catch (e) { const error = e instanceof Error ? 
e : new Error(String(e)); @@ -106,6 +117,7 @@ export async function executeToolCall( duration_ms: durationMs, success: false, error: error.message, + error_type: ToolErrorType.UNHANDLED_EXCEPTION, prompt_id: toolCallRequest.prompt_id, }); return { @@ -121,6 +133,7 @@ export async function executeToolCall( ], resultDisplay: error.message, error, + errorType: ToolErrorType.UNHANDLED_EXCEPTION, }; } } diff --git a/packages/core/src/core/prompts.ts b/packages/core/src/core/prompts.ts index f40fc0b6..16766fa9 100644 --- a/packages/core/src/core/prompts.ts +++ b/packages/core/src/core/prompts.ts @@ -185,7 +185,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring, - **Command Execution:** Use the '${ShellTool.Name}' tool for running shell commands, remembering the safety rule to explain modifying commands first. - **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user. - **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user. -- **Remembering Facts:** Use the '${MemoryTool.Name}' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. 
If unsure whether to save something, you can ask the user, "Should I remember that for you?" +- **Remembering Facts:** Use the '${MemoryTool.Name}' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information. If unsure whether to save something, you can ask the user, "Should I remember that for you?" - **Respect User Confirmations:** Most tool calls (also denoted as 'function calls') will first require confirmation from the user, where they will either approve or cancel the function call. If a user cancels a function call, respect their choice and do _not_ try to make the function call again. It is okay to request the tool call again _only_ if the user requests that same tool call on a subsequent prompt. When a user cancels a function call, assume best intentions from the user and consider inquiring if they prefer any alternative paths forward. 
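The logger changes earlier in this diff replace in-place mutation of `tag` with a `sanitizedTag` local, and give `deleteCheckpoint` a three-way contract: `true` on successful delete, `false` when the file never existed (ENOENT), and a rethrow for any other failure. A minimal standalone sketch of that contract — `checkpointPath` and `deleteCheckpoint` here are hypothetical free functions for illustration, not the actual `Logger` methods:

```typescript
import * as path from 'node:path';
import { promises as fs } from 'node:fs';

// Hypothetical standalone version of the sanitization in Logger._checkpointPath.
// Strips anything outside [a-zA-Z0-9-_] to prevent directory traversal, and
// falls back to 'default' when nothing survives sanitization.
export function checkpointPath(baseDir: string, tag: string): string {
  const sanitized = tag.replace(/[^a-zA-Z0-9-_]/g, '') || 'default';
  return path.join(baseDir, `checkpoint-${sanitized}.json`);
}

// Mirrors the new deleteCheckpoint semantics: true on delete, false when the
// checkpoint file does not exist, rethrow any other error (e.g. EACCES).
export async function deleteCheckpoint(
  baseDir: string,
  tag: string,
): Promise<boolean> {
  try {
    await fs.unlink(checkpointPath(baseDir, tag));
    return true;
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
      return false;
    }
    throw error;
  }
}
```

Distinguishing "already gone" from "failed to delete" lets callers treat a missing checkpoint as a no-op while still surfacing real filesystem errors.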
## Interaction Details diff --git a/packages/core/src/core/tokenLimits.ts b/packages/core/src/core/tokenLimits.ts index 1c7fbde9..d238cdb3 100644 --- a/packages/core/src/core/tokenLimits.ts +++ b/packages/core/src/core/tokenLimits.ts @@ -21,6 +21,7 @@ export function tokenLimit(model: Model): TokenCount { case 'gemini-2.5-pro': case 'gemini-2.5-flash-preview-05-20': case 'gemini-2.5-flash': + case 'gemini-2.5-flash-lite': case 'gemini-2.0-flash': return 1_048_576; case 'gemini-2.0-flash-preview-image-generation': diff --git a/packages/core/src/core/turn.ts b/packages/core/src/core/turn.ts index 6feb75b7..c726bc73 100644 --- a/packages/core/src/core/turn.ts +++ b/packages/core/src/core/turn.ts @@ -16,6 +16,7 @@ import { ToolResult, ToolResultDisplay, } from '../tools/tools.js'; +import { ToolErrorType } from '../tools/tool-error.js'; import { getResponseText } from '../utils/generateContentResponseUtilities.js'; import { reportError } from '../utils/errorReporting.js'; import { @@ -83,6 +84,7 @@ export interface ToolCallResponseInfo { responseParts: PartListUnion; resultDisplay: ToolResultDisplay | undefined; error: Error | undefined; + errorType: ToolErrorType | undefined; } export interface ServerToolCallConfirmationDetails { @@ -176,6 +178,7 @@ export type ServerGeminiStreamEvent = export class Turn { readonly pendingToolCalls: ToolCallRequestInfo[]; private debugResponses: GenerateContentResponse[]; + finishReason: FinishReason | undefined; constructor( private readonly chat: GeminiChat, @@ -183,6 +186,7 @@ export class Turn { ) { this.pendingToolCalls = []; this.debugResponses = []; + this.finishReason = undefined; } // The run method yields simpler events suitable for server logic async *run( @@ -248,6 +252,7 @@ export class Turn { const finishReason = resp.candidates?.[0]?.finishReason; if (finishReason) { + this.finishReason = finishReason; yield { type: GeminiEventType.Finished, value: finishReason as FinishReason, diff --git 
a/packages/core/src/ide/detect-ide.ts b/packages/core/src/ide/detect-ide.ts new file mode 100644 index 00000000..f3d8cc63 --- /dev/null +++ b/packages/core/src/ide/detect-ide.ts @@ -0,0 +1,28 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +export enum DetectedIde { + VSCode = 'vscode', +} + +export function getIdeDisplayName(ide: DetectedIde): string { + switch (ide) { + case DetectedIde.VSCode: + return 'VS Code'; + default: { + // This ensures that if a new IDE is added to the enum, we get a compile-time error. + const exhaustiveCheck: never = ide; + return exhaustiveCheck; + } + } +} + +export function detectIde(): DetectedIde | undefined { + if (process.env.TERM_PROGRAM === 'vscode') { + return DetectedIde.VSCode; + } + return undefined; +} diff --git a/packages/core/src/ide/ide-client.ts b/packages/core/src/ide/ide-client.ts index 3f91f386..be24db3e 100644 --- a/packages/core/src/ide/ide-client.ts +++ b/packages/core/src/ide/ide-client.ts @@ -4,7 +4,12 @@ * SPDX-License-Identifier: Apache-2.0 */ -import { ideContext, OpenFilesNotificationSchema } from '../ide/ideContext.js'; +import { + detectIde, + DetectedIde, + getIdeDisplayName, +} from '../ide/detect-ide.js'; +import { ideContext, IdeContextNotificationSchema } from '../ide/ideContext.js'; import { Client } from '@modelcontextprotocol/sdk/client/index.js'; import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js'; @@ -15,7 +20,7 @@ const logger = { export type IDEConnectionState = { status: IDEConnectionStatus; - details?: string; + details?: string; // User-facing }; export enum IDEConnectionStatus { @@ -29,41 +34,107 @@ export enum IDEConnectionStatus { */ export class IdeClient { client: Client | undefined = undefined; - connectionStatus: IDEConnectionStatus = IDEConnectionStatus.Disconnected; + private state: IDEConnectionState = { + status: IDEConnectionStatus.Disconnected, + }; + private static instance: 
IdeClient; + private readonly currentIde: DetectedIde | undefined; + private readonly currentIdeDisplayName: string | undefined; - constructor() { - this.connectToMcpServer().catch((err) => { + constructor(ideMode: boolean) { + this.currentIde = detectIde(); + if (this.currentIde) { + this.currentIdeDisplayName = getIdeDisplayName(this.currentIde); + } + if (!ideMode) { + return; + } + this.init().catch((err) => { logger.debug('Failed to initialize IdeClient:', err); }); } - getConnectionStatus(): { - status: IDEConnectionStatus; - details?: string; - } { - let details: string | undefined; - if (this.connectionStatus === IDEConnectionStatus.Disconnected) { - if (!process.env['GEMINI_CLI_IDE_SERVER_PORT']) { - details = 'GEMINI_CLI_IDE_SERVER_PORT environment variable is not set.'; - } + static getInstance(ideMode: boolean): IdeClient { + if (!IdeClient.instance) { + IdeClient.instance = new IdeClient(ideMode); } - return { - status: this.connectionStatus, - details, - }; + return IdeClient.instance; } - async connectToMcpServer(): Promise { - this.connectionStatus = IDEConnectionStatus.Connecting; - const idePort = process.env['GEMINI_CLI_IDE_SERVER_PORT']; - if (!idePort) { - logger.debug( - 'Unable to connect to IDE mode MCP server. GEMINI_CLI_IDE_SERVER_PORT environment variable is not set.', + getCurrentIde(): DetectedIde | undefined { + return this.currentIde; + } + + getConnectionStatus(): IDEConnectionState { + return this.state; + } + + private setState(status: IDEConnectionStatus, details?: string) { + this.state = { status, details }; + + if (status === IDEConnectionStatus.Disconnected) { + logger.debug('IDE integration is disconnected. ', details); + ideContext.clearIdeContext(); + } + } + + private getPortFromEnv(): string | undefined { + const port = process.env['GEMINI_CLI_IDE_SERVER_PORT']; + if (!port) { + this.setState( + IDEConnectionStatus.Disconnected, + 'Gemini CLI Companion extension not found. 
Install via /ide install and restart the CLI in a fresh terminal window.', ); - this.connectionStatus = IDEConnectionStatus.Disconnected; + return undefined; + } + return port; + } + + private validateWorkspacePath(): boolean { + const ideWorkspacePath = process.env['GEMINI_CLI_IDE_WORKSPACE_PATH']; + if (!ideWorkspacePath) { + this.setState( + IDEConnectionStatus.Disconnected, + 'IDE integration requires a single workspace folder to be open in the IDE. Please ensure one folder is open and try again.', + ); + return false; + } + if (ideWorkspacePath !== process.cwd()) { + this.setState( + IDEConnectionStatus.Disconnected, + `Gemini CLI is running in a different directory (${process.cwd()}) from the IDE's open workspace (${ideWorkspacePath}). Please run Gemini CLI in the same directory.`, + ); + return false; + } + return true; + } + + private registerClientHandlers() { + if (!this.client) { return; } + this.client.setNotificationHandler( + IdeContextNotificationSchema, + (notification) => { + ideContext.setIdeContext(notification.params); + }, + ); + + this.client.onerror = (_error) => { + this.setState(IDEConnectionStatus.Disconnected, 'Client error.'); + }; + + this.client.onclose = () => { + this.setState(IDEConnectionStatus.Disconnected, 'Connection closed.'); + }; + } + + async reconnect(ideMode: boolean) { + IdeClient.instance = new IdeClient(ideMode); + } + + private async establishConnection(port: string) { let transport: StreamableHTTPClientTransport | undefined; try { this.client = new Client({ @@ -71,32 +142,21 @@ export class IdeClient { // TODO(#3487): use the CLI version here. 
version: '1.0.0', }); + transport = new StreamableHTTPClientTransport( - new URL(`http://localhost:${idePort}/mcp`), + new URL(`http://localhost:${port}/mcp`), ); + + this.registerClientHandlers(); + await this.client.connect(transport); - this.client.setNotificationHandler( - OpenFilesNotificationSchema, - (notification) => { - ideContext.setOpenFilesContext(notification.params); - }, - ); - this.client.onerror = (error) => { - logger.debug('IDE MCP client error:', error); - this.connectionStatus = IDEConnectionStatus.Disconnected; - ideContext.clearOpenFilesContext(); - }; - this.client.onclose = () => { - logger.debug('IDE MCP client connection closed.'); - this.connectionStatus = IDEConnectionStatus.Disconnected; - ideContext.clearOpenFilesContext(); - }; - - this.connectionStatus = IDEConnectionStatus.Connected; + this.setState(IDEConnectionStatus.Connected); } catch (error) { - this.connectionStatus = IDEConnectionStatus.Disconnected; - logger.debug('Failed to connect to MCP server:', error); + this.setState( + IDEConnectionStatus.Disconnected, + `Failed to connect to IDE server: ${error}`, + ); if (transport) { try { await transport.close(); @@ -106,4 +166,42 @@ export class IdeClient { } } } + + async init(): Promise<void> { + if (this.state.status === IDEConnectionStatus.Connected) { + return; + } + if (!this.currentIde) { + this.setState( + IDEConnectionStatus.Disconnected, + 'Not running in a supported IDE, skipping connection.', + ); + return; + } + + this.setState(IDEConnectionStatus.Connecting); + + if (!this.validateWorkspacePath()) { + return; + } + + const port = this.getPortFromEnv(); + if (!port) { + return; + } + + await this.establishConnection(port); + } + + dispose() { + this.client?.close(); + } + + getDetectedIdeDisplayName(): string | undefined { + return this.currentIdeDisplayName; + } + + setDisconnected() { + this.setState(IDEConnectionStatus.Disconnected); + } } diff --git a/packages/core/src/ide/ide-installer.test.ts
b/packages/core/src/ide/ide-installer.test.ts new file mode 100644 index 00000000..698c3173 --- /dev/null +++ b/packages/core/src/ide/ide-installer.test.ts @@ -0,0 +1,62 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { vi, describe, it, expect, beforeEach, afterEach } from 'vitest'; +import { getIdeInstaller, IdeInstaller } from './ide-installer.js'; +import * as child_process from 'child_process'; +import * as fs from 'fs'; +import * as os from 'os'; +import { DetectedIde } from './detect-ide.js'; + +vi.mock('child_process'); +vi.mock('fs'); +vi.mock('os'); + +describe('ide-installer', () => { + describe('getIdeInstaller', () => { + it('should return a VsCodeInstaller for "vscode"', () => { + const installer = getIdeInstaller(DetectedIde.VSCode); + expect(installer).not.toBeNull(); + // A more specific check might be needed if we export the class + expect(installer).toBeInstanceOf(Object); + }); + + it('should return null for an unknown IDE', () => { + const installer = getIdeInstaller('unknown' as DetectedIde); + expect(installer).toBeNull(); + }); + }); + + describe('VsCodeInstaller', () => { + let installer: IdeInstaller; + + beforeEach(() => { + // We get a new installer for each test to reset the find command logic + installer = getIdeInstaller(DetectedIde.VSCode)!; + vi.spyOn(child_process, 'execSync').mockImplementation(() => ''); + vi.spyOn(fs, 'existsSync').mockReturnValue(false); + vi.spyOn(os, 'homedir').mockReturnValue('/home/user'); + }); + + afterEach(() => { + vi.restoreAllMocks(); + }); + + describe('install', () => { + it('should return a failure message if VS Code is not installed', async () => { + vi.spyOn(child_process, 'execSync').mockImplementation(() => { + throw new Error('Command not found'); + }); + vi.spyOn(fs, 'existsSync').mockReturnValue(false); + // Re-create the installer so it re-runs findVsCodeCommand + installer = getIdeInstaller(DetectedIde.VSCode)!; + const result = 
await installer.install(); + expect(result.success).toBe(false); + expect(result.message).toContain('VS Code CLI not found'); + }); + }); +}); diff --git a/packages/core/src/ide/ide-installer.ts b/packages/core/src/ide/ide-installer.ts new file mode 100644 index 00000000..7db8e2d2 --- /dev/null +++ b/packages/core/src/ide/ide-installer.ts @@ -0,0 +1,157 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import * as child_process from 'child_process'; +import * as process from 'process'; +import { glob } from 'glob'; +import * as path from 'path'; +import * as fs from 'fs'; +import * as os from 'os'; +import { fileURLToPath } from 'url'; +import { DetectedIde } from './detect-ide.js'; + +const VSCODE_COMMAND = process.platform === 'win32' ? 'code.cmd' : 'code'; +const VSCODE_COMPANION_EXTENSION_FOLDER = 'vscode-ide-companion'; + +export interface IdeInstaller { + install(): Promise<InstallResult>; +} + +export interface InstallResult { + success: boolean; + message: string; +} + +async function findVsCodeCommand(): Promise<string | null> { + // 1. Check PATH first. + try { + child_process.execSync( + process.platform === 'win32' + ? `where.exe ${VSCODE_COMMAND}` + : `command -v ${VSCODE_COMMAND}`, + { stdio: 'ignore' }, + ); + return VSCODE_COMMAND; + } catch { + // Not in PATH, continue to check common locations. + } + + // 2. Check common installation locations.
+ const locations: string[] = []; + const platform = process.platform; + const homeDir = os.homedir(); + + if (platform === 'darwin') { + // macOS + locations.push( + '/Applications/Visual Studio Code.app/Contents/Resources/app/bin/code', + path.join(homeDir, 'Library/Application Support/Code/bin/code'), + ); + } else if (platform === 'linux') { + // Linux + locations.push( + '/usr/share/code/bin/code', + '/snap/bin/code', + path.join(homeDir, '.local/share/code/bin/code'), + ); + } else if (platform === 'win32') { + // Windows + locations.push( + path.join( + process.env.ProgramFiles || 'C:\\Program Files', + 'Microsoft VS Code', + 'bin', + 'code.cmd', + ), + path.join( + homeDir, + 'AppData', + 'Local', + 'Programs', + 'Microsoft VS Code', + 'bin', + 'code.cmd', + ), + ); + } + + for (const location of locations) { + if (fs.existsSync(location)) { + return location; + } + } + + return null; +} + +class VsCodeInstaller implements IdeInstaller { + private vsCodeCommand: Promise<string | null>; + + constructor() { + this.vsCodeCommand = findVsCodeCommand(); + } + + async install(): Promise<InstallResult> { + const commandPath = await this.vsCodeCommand; + if (!commandPath) { + return { + success: false, + message: `VS Code CLI not found. Please ensure 'code' is in your system's PATH. For help, see https://code.visualstudio.com/docs/configure/command-line#_code-is-not-recognized-as-an-internal-or-external-command. You can also install the companion extension manually from the VS Code marketplace.`, + }; + } + + const bundleDir = path.dirname(fileURLToPath(import.meta.url)); + // The VSIX file is copied to the bundle directory as part of the build. + let vsixFiles = glob.sync(path.join(bundleDir, '*.vsix')); + if (vsixFiles.length === 0) { + // If the VSIX file is not in the bundle, it might be a dev + // environment running with `npm start`. Look for it in the original + // package location, relative to the bundle dir.
+ const devPath = path.join( + bundleDir, // .../packages/core/dist/src/ide + '..', // .../packages/core/dist/src + '..', // .../packages/core/dist + '..', // .../packages/core + '..', // .../packages + VSCODE_COMPANION_EXTENSION_FOLDER, + '*.vsix', + ); + vsixFiles = glob.sync(devPath); + } + if (vsixFiles.length === 0) { + return { + success: false, + message: + 'Could not find the required VS Code companion extension. Please file a bug via /bug.', + }; + } + + const vsixPath = vsixFiles[0]; + const command = `"${commandPath}" --install-extension "${vsixPath}" --force`; + try { + child_process.execSync(command, { stdio: 'pipe' }); + return { + success: true, + message: + 'VS Code companion extension was installed successfully. Please restart your terminal to complete the setup.', + }; + } catch (_error) { + return { + success: false, + message: `Failed to install VS Code companion extension. Please try installing it manually from the VS Code marketplace.`, + }; + } + } +} + +export function getIdeInstaller(ide: DetectedIde): IdeInstaller | null { + switch (ide) { + case DetectedIde.VSCode: + return new VsCodeInstaller(); + default: + return null; + } +} diff --git a/packages/core/src/ide/ideContext.test.ts b/packages/core/src/ide/ideContext.test.ts index 1cb09c53..7e01d3aa 100644 --- a/packages/core/src/ide/ideContext.test.ts +++ b/packages/core/src/ide/ideContext.test.ts @@ -5,136 +5,300 @@ */ import { describe, it, expect, beforeEach, vi } from 'vitest'; -import { createIdeContextStore } from './ideContext.js'; +import { + createIdeContextStore, + FileSchema, + IdeContextSchema, +} from './ideContext.js'; -describe('ideContext - Active File', () => { - let ideContext: ReturnType<typeof createIdeContextStore>; +describe('ideContext', () => { + describe('createIdeContextStore', () => { + let ideContext: ReturnType<typeof createIdeContextStore>; - beforeEach(() => { - // Create a fresh, isolated instance for each test - ideContext = createIdeContextStore(); - }); - - it('should return undefined initially for active file
context', () => { - expect(ideContext.getOpenFilesContext()).toBeUndefined(); - }); - - it('should set and retrieve the active file context', () => { - const testFile = { - activeFile: '/path/to/test/file.ts', - selectedText: '1234', - }; - - ideContext.setOpenFilesContext(testFile); - - const activeFile = ideContext.getOpenFilesContext(); - expect(activeFile).toEqual(testFile); - }); - - it('should update the active file context when called multiple times', () => { - const firstFile = { - activeFile: '/path/to/first.js', - selectedText: '1234', - }; - ideContext.setOpenFilesContext(firstFile); - - const secondFile = { - activeFile: '/path/to/second.py', - cursor: { line: 20, character: 30 }, - }; - ideContext.setOpenFilesContext(secondFile); - - const activeFile = ideContext.getOpenFilesContext(); - expect(activeFile).toEqual(secondFile); - }); - - it('should handle empty string for file path', () => { - const testFile = { - activeFile: '', - selectedText: '1234', - }; - ideContext.setOpenFilesContext(testFile); - expect(ideContext.getOpenFilesContext()).toEqual(testFile); - }); - - it('should notify subscribers when active file context changes', () => { - const subscriber1 = vi.fn(); - const subscriber2 = vi.fn(); - - ideContext.subscribeToOpenFiles(subscriber1); - ideContext.subscribeToOpenFiles(subscriber2); - - const testFile = { - activeFile: '/path/to/subscribed.ts', - cursor: { line: 15, character: 25 }, - }; - ideContext.setOpenFilesContext(testFile); - - expect(subscriber1).toHaveBeenCalledTimes(1); - expect(subscriber1).toHaveBeenCalledWith(testFile); - expect(subscriber2).toHaveBeenCalledTimes(1); - expect(subscriber2).toHaveBeenCalledWith(testFile); - - // Test with another update - const newFile = { - activeFile: '/path/to/new.js', - selectedText: '1234', - }; - ideContext.setOpenFilesContext(newFile); - - expect(subscriber1).toHaveBeenCalledTimes(2); - expect(subscriber1).toHaveBeenCalledWith(newFile); - expect(subscriber2).toHaveBeenCalledTimes(2); 
- expect(subscriber2).toHaveBeenCalledWith(newFile); - }); - - it('should stop notifying a subscriber after unsubscribe', () => { - const subscriber1 = vi.fn(); - const subscriber2 = vi.fn(); - - const unsubscribe1 = ideContext.subscribeToOpenFiles(subscriber1); - ideContext.subscribeToOpenFiles(subscriber2); - - ideContext.setOpenFilesContext({ - activeFile: '/path/to/file1.txt', - selectedText: '1234', + beforeEach(() => { + // Create a fresh, isolated instance for each test + ideContext = createIdeContextStore(); }); - expect(subscriber1).toHaveBeenCalledTimes(1); - expect(subscriber2).toHaveBeenCalledTimes(1); - unsubscribe1(); - - ideContext.setOpenFilesContext({ - activeFile: '/path/to/file2.txt', - selectedText: '1234', + it('should return undefined initially for ide context', () => { + expect(ideContext.getIdeContext()).toBeUndefined(); + }); + + it('should set and retrieve the ide context', () => { + const testFile = { + workspaceState: { + openFiles: [ + { + path: '/path/to/test/file.ts', + isActive: true, + selectedText: '1234', + timestamp: 0, + }, + ], + }, + }; + + ideContext.setIdeContext(testFile); + + const activeFile = ideContext.getIdeContext(); + expect(activeFile).toEqual(testFile); + }); + + it('should update the ide context when called multiple times', () => { + const firstFile = { + workspaceState: { + openFiles: [ + { + path: '/path/to/first.js', + isActive: true, + selectedText: '1234', + timestamp: 0, + }, + ], + }, + }; + ideContext.setIdeContext(firstFile); + + const secondFile = { + workspaceState: { + openFiles: [ + { + path: '/path/to/second.py', + isActive: true, + cursor: { line: 20, character: 30 }, + timestamp: 0, + }, + ], + }, + }; + ideContext.setIdeContext(secondFile); + + const activeFile = ideContext.getIdeContext(); + expect(activeFile).toEqual(secondFile); + }); + + it('should handle empty string for file path', () => { + const testFile = { + workspaceState: { + openFiles: [ + { + path: '', + isActive: true, + 
selectedText: '1234', + timestamp: 0, + }, + ], + }, + }; + ideContext.setIdeContext(testFile); + expect(ideContext.getIdeContext()).toEqual(testFile); + }); + + it('should notify subscribers when ide context changes', () => { + const subscriber1 = vi.fn(); + const subscriber2 = vi.fn(); + + ideContext.subscribeToIdeContext(subscriber1); + ideContext.subscribeToIdeContext(subscriber2); + + const testFile = { + workspaceState: { + openFiles: [ + { + path: '/path/to/subscribed.ts', + isActive: true, + cursor: { line: 15, character: 25 }, + timestamp: 0, + }, + ], + }, + }; + ideContext.setIdeContext(testFile); + + expect(subscriber1).toHaveBeenCalledTimes(1); + expect(subscriber1).toHaveBeenCalledWith(testFile); + expect(subscriber2).toHaveBeenCalledTimes(1); + expect(subscriber2).toHaveBeenCalledWith(testFile); + + // Test with another update + const newFile = { + workspaceState: { + openFiles: [ + { + path: '/path/to/new.js', + isActive: true, + selectedText: '1234', + timestamp: 0, + }, + ], + }, + }; + ideContext.setIdeContext(newFile); + + expect(subscriber1).toHaveBeenCalledTimes(2); + expect(subscriber1).toHaveBeenCalledWith(newFile); + expect(subscriber2).toHaveBeenCalledTimes(2); + expect(subscriber2).toHaveBeenCalledWith(newFile); + }); + + it('should stop notifying a subscriber after unsubscribe', () => { + const subscriber1 = vi.fn(); + const subscriber2 = vi.fn(); + + const unsubscribe1 = ideContext.subscribeToIdeContext(subscriber1); + ideContext.subscribeToIdeContext(subscriber2); + + ideContext.setIdeContext({ + workspaceState: { + openFiles: [ + { + path: '/path/to/file1.txt', + isActive: true, + selectedText: '1234', + timestamp: 0, + }, + ], + }, + }); + expect(subscriber1).toHaveBeenCalledTimes(1); + expect(subscriber2).toHaveBeenCalledTimes(1); + + unsubscribe1(); + + ideContext.setIdeContext({ + workspaceState: { + openFiles: [ + { + path: '/path/to/file2.txt', + isActive: true, + selectedText: '1234', + timestamp: 0, + }, + ], + }, + }); + 
expect(subscriber1).toHaveBeenCalledTimes(1); // Should not be called again + expect(subscriber2).toHaveBeenCalledTimes(2); + }); + + it('should clear the ide context', () => { + const testFile = { + workspaceState: { + openFiles: [ + { + path: '/path/to/test/file.ts', + isActive: true, + selectedText: '1234', + timestamp: 0, + }, + ], + }, + }; + + ideContext.setIdeContext(testFile); + + expect(ideContext.getIdeContext()).toEqual(testFile); + + ideContext.clearIdeContext(); + + expect(ideContext.getIdeContext()).toBeUndefined(); }); - expect(subscriber1).toHaveBeenCalledTimes(1); // Should not be called again - expect(subscriber2).toHaveBeenCalledTimes(2); }); - it('should allow the cursor to be optional', () => { - const testFile = { - activeFile: '/path/to/test/file.ts', - }; + describe('FileSchema', () => { + it('should validate a file with only required fields', () => { + const file = { + path: '/path/to/file.ts', + timestamp: 12345, + }; + const result = FileSchema.safeParse(file); + expect(result.success).toBe(true); + }); - ideContext.setOpenFilesContext(testFile); + it('should validate a file with all fields', () => { + const file = { + path: '/path/to/file.ts', + timestamp: 12345, + isActive: true, + selectedText: 'const x = 1;', + cursor: { + line: 10, + character: 20, + }, + }; + const result = FileSchema.safeParse(file); + expect(result.success).toBe(true); + }); - const activeFile = ideContext.getOpenFilesContext(); - expect(activeFile).toEqual(testFile); + it('should fail validation if path is missing', () => { + const file = { + timestamp: 12345, + }; + const result = FileSchema.safeParse(file); + expect(result.success).toBe(false); + }); + + it('should fail validation if timestamp is missing', () => { + const file = { + path: '/path/to/file.ts', + }; + const result = FileSchema.safeParse(file); + expect(result.success).toBe(false); + }); }); - it('should clear the active file context', () => { - const testFile = { - activeFile: 
'/path/to/test/file.ts', - selectedText: '1234', - }; + describe('IdeContextSchema', () => { + it('should validate an empty context', () => { + const context = {}; + const result = IdeContextSchema.safeParse(context); + expect(result.success).toBe(true); + }); - ideContext.setOpenFilesContext(testFile); + it('should validate a context with an empty workspaceState', () => { + const context = { + workspaceState: {}, + }; + const result = IdeContextSchema.safeParse(context); + expect(result.success).toBe(true); + }); - expect(ideContext.getOpenFilesContext()).toEqual(testFile); + it('should validate a context with an empty openFiles array', () => { + const context = { + workspaceState: { + openFiles: [], + }, + }; + const result = IdeContextSchema.safeParse(context); + expect(result.success).toBe(true); + }); - ideContext.clearOpenFilesContext(); + it('should validate a context with a valid file', () => { + const context = { + workspaceState: { + openFiles: [ + { + path: '/path/to/file.ts', + timestamp: 12345, + }, + ], + }, + }; + const result = IdeContextSchema.safeParse(context); + expect(result.success).toBe(true); + }); - expect(ideContext.getOpenFilesContext()).toBeUndefined(); + it('should fail validation with an invalid file', () => { + const context = { + workspaceState: { + openFiles: [ + { + timestamp: 12345, // path is missing + }, + ], + }, + }; + const result = IdeContextSchema.safeParse(context); + expect(result.success).toBe(false); + }); }); }); diff --git a/packages/core/src/ide/ideContext.ts b/packages/core/src/ide/ideContext.ts index bc7383a1..588e25ee 100644 --- a/packages/core/src/ide/ideContext.ts +++ b/packages/core/src/ide/ideContext.ts @@ -7,97 +7,96 @@ import { z } from 'zod'; /** - * Zod schema for validating a cursor position. + * Zod schema for validating a file context from the IDE. 
*/ -export const CursorSchema = z.object({ - line: z.number(), - character: z.number(), -}); -export type Cursor = z.infer<typeof CursorSchema>; - -/** - * Zod schema for validating an active file context from the IDE. - */ -export const OpenFilesSchema = z.object({ - activeFile: z.string(), +export const FileSchema = z.object({ + path: z.string(), + timestamp: z.number(), + isActive: z.boolean().optional(), selectedText: z.string().optional(), - cursor: CursorSchema.optional(), - recentOpenFiles: z - .array( - z.object({ - filePath: z.string(), - timestamp: z.number(), - }), - ) + cursor: z + .object({ + line: z.number(), + character: z.number(), + }) .optional(), }); -export type OpenFiles = z.infer<typeof OpenFilesSchema>; +export type File = z.infer<typeof FileSchema>; + +export const IdeContextSchema = z.object({ + workspaceState: z + .object({ + openFiles: z.array(FileSchema).optional(), + }) + .optional(), +}); +export type IdeContext = z.infer<typeof IdeContextSchema>; /** - * Zod schema for validating the 'ide/openFilesChanged' notification from the IDE. + * Zod schema for validating the 'ide/contextUpdate' notification from the IDE. */ -export const OpenFilesNotificationSchema = z.object({ - method: z.literal('ide/openFilesChanged'), - params: OpenFilesSchema, +export const IdeContextNotificationSchema = z.object({ + method: z.literal('ide/contextUpdate'), + params: IdeContextSchema, }); -type OpenFilesSubscriber = (openFiles: OpenFiles | undefined) => void; +type IdeContextSubscriber = (ideContext: IdeContext | undefined) => void; /** - * Creates a new store for managing the IDE's active file context. + * Creates a new store for managing the IDE's context. * This factory function encapsulates the state and logic, allowing for the creation * of isolated instances, which is particularly useful for testing. * - * @returns An object with methods to interact with the active file context. + * @returns An object with methods to interact with the IDE context.
*/ export function createIdeContextStore() { - let openFilesContext: OpenFiles | undefined = undefined; - const subscribers = new Set<OpenFilesSubscriber>(); + let ideContextState: IdeContext | undefined = undefined; + const subscribers = new Set<IdeContextSubscriber>(); /** - * Notifies all registered subscribers about the current active file context. + * Notifies all registered subscribers about the current IDE context. */ function notifySubscribers(): void { for (const subscriber of subscribers) { - subscriber(openFilesContext); + subscriber(ideContextState); } } /** - * Sets the active file context and notifies all registered subscribers of the change. - * @param newOpenFiles The new active file context from the IDE. + * Sets the IDE context and notifies all registered subscribers of the change. + * @param newIdeContext The new IDE context from the IDE. */ - function setOpenFilesContext(newOpenFiles: OpenFiles): void { - openFilesContext = newOpenFiles; + function setIdeContext(newIdeContext: IdeContext): void { + ideContextState = newIdeContext; notifySubscribers(); } /** - * Clears the active file context and notifies all registered subscribers of the change. + * Clears the IDE context and notifies all registered subscribers of the change. */ - function clearOpenFilesContext(): void { - openFilesContext = undefined; + function clearIdeContext(): void { + ideContextState = undefined; notifySubscribers(); } /** - * Retrieves the current active file context. - * @returns The `OpenFiles` object if a file is active; otherwise, `undefined`. + * Retrieves the current IDE context. + * @returns The `IdeContext` object if a file is active; otherwise, `undefined`. */ - function getOpenFilesContext(): OpenFiles | undefined { - return openFilesContext; + function getIdeContext(): IdeContext | undefined { + return ideContextState; } /** - * Subscribes to changes in the active file context. + * Subscribes to changes in the IDE context.
* - * When the active file context changes, the provided `subscriber` function will be called. + * When the IDE context changes, the provided `subscriber` function will be called. * Note: The subscriber is not called with the current value upon subscription. * - * @param subscriber The function to be called when the active file context changes. + * @param subscriber The function to be called when the IDE context changes. * @returns A function that, when called, will unsubscribe the provided subscriber. */ - function subscribeToOpenFiles(subscriber: OpenFilesSubscriber): () => void { + function subscribeToIdeContext(subscriber: IdeContextSubscriber): () => void { subscribers.add(subscriber); return () => { subscribers.delete(subscriber); @@ -105,10 +104,10 @@ export function createIdeContextStore() { } return { - setOpenFilesContext, - getOpenFilesContext, - subscribeToOpenFiles, - clearOpenFilesContext, + setIdeContext, + getIdeContext, + subscribeToIdeContext, + clearIdeContext, }; } diff --git a/packages/core/src/index.ts b/packages/core/src/index.ts index 34f25d21..fb322722 100644 --- a/packages/core/src/index.ts +++ b/packages/core/src/index.ts @@ -31,6 +31,7 @@ export * from './utils/errors.js'; export * from './utils/getFolderStructure.js'; export * from './utils/memoryDiscovery.js'; export * from './utils/gitIgnoreParser.js'; +export * from './utils/gitUtils.js'; export * from './utils/editor.js'; export * from './utils/quotaErrorDetection.js'; export * from './utils/fileUtils.js'; @@ -47,12 +48,15 @@ export * from './services/gitService.js'; // Export IDE specific logic export * from './ide/ide-client.js'; export * from './ide/ideContext.js'; +export * from './ide/ide-installer.js'; +export { getIdeDisplayName, DetectedIde } from './ide/detect-ide.js'; // Export Shell Execution Service export * from './services/shellExecutionService.js'; // Export base tool definitions export * from './tools/tools.js'; +export * from './tools/tool-error.js'; export * from 
'./tools/tool-registry.js'; // Export prompt logic diff --git a/packages/core/src/mcp/oauth-provider.test.ts b/packages/core/src/mcp/oauth-provider.test.ts index 20dc9fab..5bfd637b 100644 --- a/packages/core/src/mcp/oauth-provider.test.ts +++ b/packages/core/src/mcp/oauth-provider.test.ts @@ -7,7 +7,6 @@ import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import * as http from 'node:http'; import * as crypto from 'node:crypto'; -import open from 'open'; import { MCPOAuthProvider, MCPOAuthConfig, @@ -17,7 +16,10 @@ import { import { MCPOAuthTokenStorage, MCPOAuthToken } from './oauth-token-storage.js'; // Mock dependencies -vi.mock('open'); +const mockOpenBrowserSecurely = vi.hoisted(() => vi.fn()); +vi.mock('../utils/secure-browser-launcher.js', () => ({ + openBrowserSecurely: mockOpenBrowserSecurely, +})); vi.mock('node:crypto'); vi.mock('./oauth-token-storage.js'); @@ -64,6 +66,7 @@ describe('MCPOAuthProvider', () => { beforeEach(() => { vi.clearAllMocks(); + mockOpenBrowserSecurely.mockClear(); vi.spyOn(console, 'log').mockImplementation(() => {}); vi.spyOn(console, 'warn').mockImplementation(() => {}); vi.spyOn(console, 'error').mockImplementation(() => {}); @@ -145,7 +148,9 @@ describe('MCPOAuthProvider', () => { expiresAt: expect.any(Number), }); - expect(open).toHaveBeenCalledWith(expect.stringContaining('authorize')); + expect(mockOpenBrowserSecurely).toHaveBeenCalledWith( + expect.stringContaining('authorize'), + ); expect(MCPOAuthTokenStorage.saveToken).toHaveBeenCalledWith( 'test-server', expect.objectContaining({ accessToken: 'access_token_123' }), @@ -672,13 +677,10 @@ describe('MCPOAuthProvider', () => { describe('Authorization URL building', () => { it('should build correct authorization URL with all parameters', async () => { // Mock to capture the URL that would be opened - let capturedUrl: string; - vi.mocked(open).mockImplementation((url) => { + let capturedUrl: string | undefined; + 
mockOpenBrowserSecurely.mockImplementation((url: string) => { capturedUrl = url; - // Return a minimal mock ChildProcess - return Promise.resolve({ - pid: 1234, - } as unknown as import('child_process').ChildProcess); + return Promise.resolve(); }); let callbackHandler: unknown; @@ -711,6 +713,7 @@ describe('MCPOAuthProvider', () => { await MCPOAuthProvider.authenticate('test-server', mockConfig); + expect(capturedUrl).toBeDefined(); expect(capturedUrl!).toContain('response_type=code'); expect(capturedUrl!).toContain('client_id=test-client-id'); expect(capturedUrl!).toContain('code_challenge=code_challenge_mock'); diff --git a/packages/core/src/mcp/oauth-provider.ts b/packages/core/src/mcp/oauth-provider.ts index 03401a4c..ff21d6d7 100644 --- a/packages/core/src/mcp/oauth-provider.ts +++ b/packages/core/src/mcp/oauth-provider.ts @@ -7,7 +7,7 @@ import * as http from 'node:http'; import * as crypto from 'node:crypto'; import { URL } from 'node:url'; -import open from 'open'; +import { openBrowserSecurely } from '../utils/secure-browser-launcher.js'; import { MCPOAuthToken, MCPOAuthTokenStorage } from './oauth-token-storage.js'; import { getErrorMessage } from '../utils/errors.js'; import { OAuthUtils } from './oauth-utils.js'; @@ -593,9 +593,9 @@ export class MCPOAuthProvider { // Start callback server const callbackPromise = this.startCallbackServer(pkceParams.state); - // Open browser + // Open browser securely try { - await open(authUrl); + await openBrowserSecurely(authUrl); } catch (error) { console.warn( 'Failed to open browser automatically:', diff --git a/packages/core/src/mcp/oauth-utils.test.ts b/packages/core/src/mcp/oauth-utils.test.ts index b27d97b3..12871ff2 100644 --- a/packages/core/src/mcp/oauth-utils.test.ts +++ b/packages/core/src/mcp/oauth-utils.test.ts @@ -140,7 +140,7 @@ describe('OAuthUtils', () => { describe('parseWWWAuthenticateHeader', () => { it('should parse resource metadata URI from WWW-Authenticate header', () => { const header = - 
'Bearer realm="example", resource_metadata_uri="https://example.com/.well-known/oauth-protected-resource"'; + 'Bearer realm="example", resource_metadata="https://example.com/.well-known/oauth-protected-resource"'; const result = OAuthUtils.parseWWWAuthenticateHeader(header); expect(result).toBe( 'https://example.com/.well-known/oauth-protected-resource', diff --git a/packages/core/src/mcp/oauth-utils.ts b/packages/core/src/mcp/oauth-utils.ts index 6dad17c8..64fd68be 100644 --- a/packages/core/src/mcp/oauth-utils.ts +++ b/packages/core/src/mcp/oauth-utils.ts @@ -198,8 +198,8 @@ export class OAuthUtils { * @returns The resource metadata URI if found */ static parseWWWAuthenticateHeader(header: string): string | null { - // Parse Bearer realm and resource_metadata_uri - const match = header.match(/resource_metadata_uri="([^"]+)"/); + // Parse Bearer realm and resource_metadata + const match = header.match(/resource_metadata="([^"]+)"/); if (match) { return match[1]; } diff --git a/packages/core/src/services/loopDetectionService.test.ts b/packages/core/src/services/loopDetectionService.test.ts index 9f5d63a7..2ec32ae7 100644 --- a/packages/core/src/services/loopDetectionService.test.ts +++ b/packages/core/src/services/loopDetectionService.test.ts @@ -56,6 +56,15 @@ describe('LoopDetectionService', () => { value: content, }); + const createRepetitiveContent = (id: number, length: number): string => { + const baseString = `This is a unique sentence, id=${id}. 
`; + let content = ''; + while (content.length < length) { + content += baseString; + } + return content.slice(0, length); + }; + describe('Tool Call Loop Detection', () => { it(`should not detect a loop for fewer than TOOL_CALL_LOOP_THRESHOLD identical calls`, () => { const event = createToolCallRequestEvent('testTool', { param: 'value' }); @@ -149,13 +158,11 @@ describe('LoopDetectionService', () => { it('should detect a loop when a chunk of content repeats consecutively', () => { service.reset(''); - const repeatedContent = 'a'.repeat(CONTENT_CHUNK_SIZE); + const repeatedContent = createRepetitiveContent(1, CONTENT_CHUNK_SIZE); let isLoop = false; for (let i = 0; i < CONTENT_LOOP_THRESHOLD; i++) { - for (const char of repeatedContent) { - isLoop = service.addAndCheck(createContentEvent(char)); - } + isLoop = service.addAndCheck(createContentEvent(repeatedContent)); } expect(isLoop).toBe(true); expect(loggers.logLoopDetected).toHaveBeenCalledTimes(1); @@ -163,23 +170,119 @@ describe('LoopDetectionService', () => { it('should not detect a loop if repetitions are very far apart', () => { service.reset(''); - const repeatedContent = 'b'.repeat(CONTENT_CHUNK_SIZE); + const repeatedContent = createRepetitiveContent(1, CONTENT_CHUNK_SIZE); const fillerContent = generateRandomString(500); let isLoop = false; for (let i = 0; i < CONTENT_LOOP_THRESHOLD; i++) { - for (const char of repeatedContent) { - isLoop = service.addAndCheck(createContentEvent(char)); - } - for (const char of fillerContent) { - isLoop = service.addAndCheck(createContentEvent(char)); - } + isLoop = service.addAndCheck(createContentEvent(repeatedContent)); + isLoop = service.addAndCheck(createContentEvent(fillerContent)); } expect(isLoop).toBe(false); expect(loggers.logLoopDetected).not.toHaveBeenCalled(); }); }); + describe('Content Loop Detection with Code Blocks', () => { + it('should not detect a loop when repetitive content is inside a code block', () => { + service.reset(''); + const 
repeatedContent = createRepetitiveContent(1, CONTENT_CHUNK_SIZE); + + service.addAndCheck(createContentEvent('```\n')); + + for (let i = 0; i < CONTENT_LOOP_THRESHOLD; i++) { + const isLoop = service.addAndCheck(createContentEvent(repeatedContent)); + expect(isLoop).toBe(false); + } + + const isLoop = service.addAndCheck(createContentEvent('\n```')); + expect(isLoop).toBe(false); + expect(loggers.logLoopDetected).not.toHaveBeenCalled(); + }); + + it('should detect a loop when repetitive content is outside a code block', () => { + service.reset(''); + const repeatedContent = createRepetitiveContent(1, CONTENT_CHUNK_SIZE); + + service.addAndCheck(createContentEvent('```')); + service.addAndCheck(createContentEvent('\nsome code\n')); + service.addAndCheck(createContentEvent('```')); + + let isLoop = false; + for (let i = 0; i < CONTENT_LOOP_THRESHOLD; i++) { + isLoop = service.addAndCheck(createContentEvent(repeatedContent)); + } + expect(isLoop).toBe(true); + expect(loggers.logLoopDetected).toHaveBeenCalledTimes(1); + }); + + it('should handle content with multiple code blocks and no loops', () => { + service.reset(''); + service.addAndCheck(createContentEvent('```\ncode1\n```')); + service.addAndCheck(createContentEvent('\nsome text\n')); + const isLoop = service.addAndCheck(createContentEvent('```\ncode2\n```')); + + expect(isLoop).toBe(false); + expect(loggers.logLoopDetected).not.toHaveBeenCalled(); + }); + + it('should handle content with mixed code blocks and looping text', () => { + service.reset(''); + const repeatedContent = createRepetitiveContent(1, CONTENT_CHUNK_SIZE); + + service.addAndCheck(createContentEvent('```')); + service.addAndCheck(createContentEvent('\ncode1\n')); + service.addAndCheck(createContentEvent('```')); + + let isLoop = false; + for (let i = 0; i < CONTENT_LOOP_THRESHOLD; i++) { + isLoop = service.addAndCheck(createContentEvent(repeatedContent)); + } + + expect(isLoop).toBe(true); + 
expect(loggers.logLoopDetected).toHaveBeenCalledTimes(1); + }); + + it('should not detect a loop for a long code block with some repeating tokens', () => { + service.reset(''); + const repeatingTokens = + 'for (let i = 0; i < 10; i++) { console.log(i); }'; + + service.addAndCheck(createContentEvent('```\n')); + + for (let i = 0; i < 20; i++) { + const isLoop = service.addAndCheck(createContentEvent(repeatingTokens)); + expect(isLoop).toBe(false); + } + + const isLoop = service.addAndCheck(createContentEvent('\n```')); + expect(isLoop).toBe(false); + expect(loggers.logLoopDetected).not.toHaveBeenCalled(); + }); + + it('should reset tracking when a code fence is found', () => { + service.reset(''); + const repeatedContent = createRepetitiveContent(1, CONTENT_CHUNK_SIZE); + + for (let i = 0; i < CONTENT_LOOP_THRESHOLD - 1; i++) { + service.addAndCheck(createContentEvent(repeatedContent)); + } + + // This should not trigger a loop because of the reset + service.addAndCheck(createContentEvent('```')); + + // We are now in a code block, so loop detection should be off. + // Let's add the repeated content again, it should not trigger a loop. 
+ let isLoop = false; + for (let i = 0; i < CONTENT_LOOP_THRESHOLD; i++) { + isLoop = service.addAndCheck(createContentEvent(repeatedContent)); + expect(isLoop).toBe(false); + } + + expect(loggers.logLoopDetected).not.toHaveBeenCalled(); + }); + }); + describe('Edge Cases', () => { it('should handle empty content', () => { const event = createContentEvent(''); diff --git a/packages/core/src/services/loopDetectionService.ts b/packages/core/src/services/loopDetectionService.ts index 7b3da20b..f71b8434 100644 --- a/packages/core/src/services/loopDetectionService.ts +++ b/packages/core/src/services/loopDetectionService.ts @@ -61,6 +61,7 @@ export class LoopDetectionService { private contentStats = new Map(); private lastContentIndex = 0; private loopDetected = false; + private inCodeBlock = false; // LLM loop track tracking private turnsInCurrentPrompt = 0; @@ -156,8 +157,27 @@ export class LoopDetectionService { * 2. Truncating history if it exceeds the maximum length * 3. Analyzing content chunks for repetitive patterns using hashing * 4. Detecting loops when identical chunks appear frequently within a short distance + * 5. Disabling loop detection within code blocks to prevent false positives, + * as repetitive code structures are common and not necessarily loops. */ private checkContentLoop(content: string): boolean { + // Code blocks can often contain repetitive syntax that is not indicative of a loop. + // To avoid false positives, we detect when we are inside a code block and + // temporarily disable loop detection. + const numFences = (content.match(/```/g) ?? []).length; + if (numFences) { + // Reset tracking when a code fence is detected to avoid analyzing content + // that spans across code block boundaries. + this.resetContentTracking(); + } + + const wasInCodeBlock = this.inCodeBlock; + this.inCodeBlock = + numFences % 2 === 0 ? 
this.inCodeBlock : !this.inCodeBlock; + if (wasInCodeBlock) { + return false; + } + this.streamContentHistory += content; this.truncateAndUpdate(); diff --git a/packages/core/src/services/shellExecutionService.test.ts b/packages/core/src/services/shellExecutionService.test.ts index 4d1655a2..cfce08d2 100644 --- a/packages/core/src/services/shellExecutionService.test.ts +++ b/packages/core/src/services/shellExecutionService.test.ts @@ -91,9 +91,9 @@ describe('ShellExecutionService', () => { }); expect(mockSpawn).toHaveBeenCalledWith( - 'bash', - ['-c', 'ls -l'], - expect.any(Object), + 'ls -l', + [], + expect.objectContaining({ shell: 'bash' }), ); expect(result.exitCode).toBe(0); expect(result.signal).toBeNull(); @@ -334,23 +334,31 @@ describe('ShellExecutionService', () => { describe('Platform-Specific Behavior', () => { it('should use cmd.exe on Windows', async () => { mockPlatform.mockReturnValue('win32'); - await simulateExecution('dir', (cp) => cp.emit('exit', 0, null)); + await simulateExecution('dir "foo bar"', (cp) => + cp.emit('exit', 0, null), + ); expect(mockSpawn).toHaveBeenCalledWith( - 'cmd.exe', - ['/c', 'dir'], - expect.objectContaining({ detached: false }), + 'dir "foo bar"', + [], + expect.objectContaining({ + shell: true, + detached: false, + }), ); }); it('should use bash and detached process group on Linux', async () => { mockPlatform.mockReturnValue('linux'); - await simulateExecution('ls', (cp) => cp.emit('exit', 0, null)); + await simulateExecution('ls "foo bar"', (cp) => cp.emit('exit', 0, null)); expect(mockSpawn).toHaveBeenCalledWith( - 'bash', - ['-c', 'ls'], - expect.objectContaining({ detached: true }), + 'ls "foo bar"', + [], + expect.objectContaining({ + shell: 'bash', + detached: true, + }), ); }); }); diff --git a/packages/core/src/services/shellExecutionService.ts b/packages/core/src/services/shellExecutionService.ts index 0f0002cd..d1126a7d 100644 --- a/packages/core/src/services/shellExecutionService.ts +++ 
b/packages/core/src/services/shellExecutionService.ts @@ -89,13 +89,16 @@ export class ShellExecutionService { abortSignal: AbortSignal, ): ShellExecutionHandle { const isWindows = os.platform() === 'win32'; - const shell = isWindows ? 'cmd.exe' : 'bash'; - const shellArgs = [isWindows ? '/c' : '-c', commandToExecute]; - const child = spawn(shell, shellArgs, { + const child = spawn(commandToExecute, [], { cwd, stdio: ['ignore', 'pipe', 'pipe'], - detached: !isWindows, // Use process groups on non-Windows for robust killing + // Use bash unless in Windows (since it doesn't support bash). + // For windows, just use the default. + shell: isWindows ? true : 'bash', + // Use process groups on non-Windows for robust killing. + // Windows process termination is handled by `taskkill /t`. + detached: !isWindows, env: { ...process.env, GEMINI_CLI: '1', diff --git a/packages/core/src/telemetry/clearcut-logger/clearcut-logger.ts b/packages/core/src/telemetry/clearcut-logger/clearcut-logger.ts index d36a16b5..6b85a664 100644 --- a/packages/core/src/telemetry/clearcut-logger/clearcut-logger.ts +++ b/packages/core/src/telemetry/clearcut-logger/clearcut-logger.ts @@ -18,16 +18,19 @@ import { ApiErrorEvent, FlashFallbackEvent, LoopDetectedEvent, - FlashDecidedToContinueEvent, + NextSpeakerCheckEvent, + SlashCommandEvent, + MalformedJsonResponseEvent, } from '../types.js'; import { EventMetadataKey } from './event-metadata-key.js'; import { Config } from '../../config/config.js'; -import { getInstallationId } from '../../utils/user_id.js'; +import { safeJsonStringify } from '../../utils/safeJsonStringify.js'; import { getCachedGoogleAccount, getLifetimeGoogleAccounts, } from '../../utils/user_account.js'; -import { safeJsonStringify } from '../../utils/safeJsonStringify.js'; +import { HttpError, retryWithBackoff } from '../../utils/retry.js'; +import { getInstallationId } from '../../utils/user_id.js'; const start_session_event_name = 'start_session'; const new_prompt_event_name = 
'new_prompt'; @@ -38,7 +41,9 @@ const api_error_event_name = 'api_error'; const end_session_event_name = 'end_session'; const flash_fallback_event_name = 'flash_fallback'; const loop_detected_event_name = 'loop_detected'; -const flash_decided_to_continue_event_name = 'flash_decided_to_continue'; +const next_speaker_check_event_name = 'next_speaker_check'; +const slash_command_event_name = 'slash_command'; +const malformed_json_response_event_name = 'malformed_json_response'; export interface LogResponse { nextRequestWaitMs?: number; @@ -113,66 +118,81 @@ export class ClearcutLogger { }); } - flushToClearcut(): Promise { + async flushToClearcut(): Promise { if (this.config?.getDebugMode()) { console.log('Flushing log events to Clearcut.'); } const eventsToSend = [...this.events]; - this.events.length = 0; + if (eventsToSend.length === 0) { + return {}; + } - return new Promise((resolve, reject) => { - const request = [ - { - log_source_name: 'CONCORD', - request_time_ms: Date.now(), - log_event: eventsToSend, - }, - ]; - const body = safeJsonStringify(request); - const options = { - hostname: 'play.googleapis.com', - path: '/log', - method: 'POST', - headers: { 'Content-Length': Buffer.byteLength(body) }, - }; - const bufs: Buffer[] = []; - const req = https.request( - { - ...options, - agent: this.getProxyAgent(), - }, - (res) => { - res.on('data', (buf) => bufs.push(buf)); - res.on('end', () => { - resolve(Buffer.concat(bufs)); - }); - }, - ); - req.on('error', (e) => { - if (this.config?.getDebugMode()) { - console.log('Clearcut POST request error: ', e); - } - // Add the events back to the front of the queue to be retried. 
- this.events.unshift(...eventsToSend); - reject(e); + const flushFn = () => + new Promise((resolve, reject) => { + const request = [ + { + log_source_name: 'CONCORD', + request_time_ms: Date.now(), + log_event: eventsToSend, + }, + ]; + const body = safeJsonStringify(request); + const options = { + hostname: 'play.googleapis.com', + path: '/log', + method: 'POST', + headers: { 'Content-Length': Buffer.byteLength(body) }, + }; + const bufs: Buffer[] = []; + const req = https.request( + { + ...options, + agent: this.getProxyAgent(), + }, + (res) => { + if ( + res.statusCode && + (res.statusCode < 200 || res.statusCode >= 300) + ) { + const err: HttpError = new Error( + `Request failed with status ${res.statusCode}`, + ); + err.status = res.statusCode; + res.resume(); + return reject(err); + } + res.on('data', (buf) => bufs.push(buf)); + res.on('end', () => resolve(Buffer.concat(bufs))); + }, + ); + req.on('error', reject); + req.end(body); }); - req.end(body); - }) - .then((buf: Buffer) => { - try { - this.last_flush_time = Date.now(); - return this.decodeLogResponse(buf) || {}; - } catch (error: unknown) { - console.error('Error flushing log events:', error); - return {}; - } - }) - .catch((error: unknown) => { - // Handle all errors to prevent unhandled promise rejections - console.error('Error flushing log events:', error); - // Return empty response to maintain the Promise contract - return {}; + + try { + const responseBuffer = await retryWithBackoff(flushFn, { + maxAttempts: 3, + initialDelayMs: 200, + shouldRetry: (err: unknown) => { + if (!(err instanceof Error)) return false; + const status = (err as HttpError).status as number | undefined; + // If status is not available, it's likely a network error + if (status === undefined) return true; + + // Retry on 429 (Too many Requests) and 5xx server errors. 
+ return status === 429 || (status >= 500 && status < 600); + }, }); + + this.events.splice(0, eventsToSend.length); + this.last_flush_time = Date.now(); + return this.decodeLogResponse(responseBuffer) || {}; + } catch (error) { + if (this.config?.getDebugMode()) { + console.error('Clearcut flush failed after multiple retries.', error); + } + return {}; + } } // Visible for testing. Decodes protobuf-encoded response from Clearcut server. @@ -215,7 +235,11 @@ export class ClearcutLogger { } logStartSessionEvent(event: StartSessionEvent): void { - const surface = process.env.SURFACE || 'SURFACE_NOT_SET'; + const surface = + process.env.CLOUD_SHELL === 'true' + ? 'CLOUD_SHELL' + : process.env.SURFACE || 'SURFACE_NOT_SET'; + const data = [ { gemini_cli_key: EventMetadataKey.GEMINI_CLI_START_SESSION_MODEL, @@ -494,12 +518,20 @@ export class ClearcutLogger { this.flushIfNeeded(); } - logFlashDecidedToContinueEvent(event: FlashDecidedToContinueEvent): void { + logNextSpeakerCheck(event: NextSpeakerCheckEvent): void { const data = [ { gemini_cli_key: EventMetadataKey.GEMINI_CLI_PROMPT_ID, value: JSON.stringify(event.prompt_id), }, + { + gemini_cli_key: EventMetadataKey.GEMINI_CLI_RESPONSE_FINISH_REASON, + value: JSON.stringify(event.finish_reason), + }, + { + gemini_cli_key: EventMetadataKey.GEMINI_CLI_NEXT_SPEAKER_CHECK_RESULT, + value: JSON.stringify(event.result), + }, { gemini_cli_key: EventMetadataKey.GEMINI_CLI_SESSION_ID, value: this.config?.getSessionId() ?? 
'', @@ -507,7 +539,41 @@ export class ClearcutLogger { ]; this.enqueueLogEvent( - this.createLogEvent(flash_decided_to_continue_event_name, data), + this.createLogEvent(next_speaker_check_event_name, data), + ); + this.flushIfNeeded(); + } + + logSlashCommandEvent(event: SlashCommandEvent): void { + const data = [ + { + gemini_cli_key: EventMetadataKey.GEMINI_CLI_SLASH_COMMAND_NAME, + value: JSON.stringify(event.command), + }, + ]; + + if (event.subcommand) { + data.push({ + gemini_cli_key: EventMetadataKey.GEMINI_CLI_SLASH_COMMAND_SUBCOMMAND, + value: JSON.stringify(event.subcommand), + }); + } + + this.enqueueLogEvent(this.createLogEvent(slash_command_event_name, data)); + this.flushIfNeeded(); + } + + logMalformedJsonResponseEvent(event: MalformedJsonResponseEvent): void { + const data = [ + { + gemini_cli_key: + EventMetadataKey.GEMINI_CLI_MALFORMED_JSON_RESPONSE_MODEL, + value: JSON.stringify(event.model), + }, + ]; + + this.enqueueLogEvent( + this.createLogEvent(malformed_json_response_event_name, data), ); this.flushIfNeeded(); } diff --git a/packages/core/src/telemetry/clearcut-logger/event-metadata-key.ts b/packages/core/src/telemetry/clearcut-logger/event-metadata-key.ts index b34cc6ea..0fc35894 100644 --- a/packages/core/src/telemetry/clearcut-logger/event-metadata-key.ts +++ b/packages/core/src/telemetry/clearcut-logger/event-metadata-key.ts @@ -163,6 +163,33 @@ export enum EventMetadataKey { // Logs the type of loop detected. GEMINI_CLI_LOOP_DETECTED_TYPE = 38, + + // ========================================================================== + // Slash Command Event Keys + // =========================================================================== + + // Logs the name of the slash command. + GEMINI_CLI_SLASH_COMMAND_NAME = 41, + + // Logs the subcommand of the slash command. 
+ GEMINI_CLI_SLASH_COMMAND_SUBCOMMAND = 42, + + // ========================================================================== + // Next Speaker Check Event Keys + // =========================================================================== + + // Logs the finish reason of the previous streamGenerateContent response + GEMINI_CLI_RESPONSE_FINISH_REASON = 43, + + // Logs the result of the next speaker check + GEMINI_CLI_NEXT_SPEAKER_CHECK_RESULT = 44, + + // ========================================================================== + // Malformed JSON Response Event Keys + // ========================================================================== + + // Logs the model that produced the malformed JSON response. + GEMINI_CLI_MALFORMED_JSON_RESPONSE_MODEL = 45, } export function getEventMetadataKey( diff --git a/packages/core/src/telemetry/constants.ts b/packages/core/src/telemetry/constants.ts index d6c8959d..bcd0cf26 100644 --- a/packages/core/src/telemetry/constants.ts +++ b/packages/core/src/telemetry/constants.ts @@ -13,8 +13,9 @@ export const EVENT_API_ERROR = 'qwen-code.api_error'; export const EVENT_API_RESPONSE = 'qwen-code.api_response'; export const EVENT_CLI_CONFIG = 'qwen-code.config'; export const EVENT_FLASH_FALLBACK = 'qwen-code.flash_fallback'; -export const EVENT_FLASH_DECIDED_TO_CONTINUE = - 'qwen-code.flash_decided_to_continue'; +export const EVENT_NEXT_SPEAKER_CHECK = 'qwen-code.next_speaker_check'; +export const EVENT_SLASH_COMMAND = 'qwen-code.slash_command'; + export const METRIC_TOOL_CALL_COUNT = 'qwen-code.tool.call.count'; export const METRIC_TOOL_CALL_LATENCY = 'qwen-code.tool.call.latency'; export const METRIC_API_REQUEST_COUNT = 'qwen-code.api.request.count'; diff --git a/packages/core/src/telemetry/index.ts b/packages/core/src/telemetry/index.ts index 5163084a..47dc4ff0 100644 --- a/packages/core/src/telemetry/index.ts +++ b/packages/core/src/telemetry/index.ts @@ -27,6 +27,7 @@ export { logApiError, logApiResponse, logFlashFallback, + 
logSlashCommand, } from './loggers.js'; export { StartSessionEvent, @@ -38,6 +39,7 @@ export { ApiResponseEvent, TelemetryEvent, FlashFallbackEvent, + SlashCommandEvent, } from './types.js'; export { SpanStatusCode, ValueType } from '@opentelemetry/api'; export { SemanticAttributes } from '@opentelemetry/semantic-conventions'; diff --git a/packages/core/src/telemetry/loggers.test.circular.ts b/packages/core/src/telemetry/loggers.test.circular.ts index 62a61bfd..80444a0d 100644 --- a/packages/core/src/telemetry/loggers.test.circular.ts +++ b/packages/core/src/telemetry/loggers.test.circular.ts @@ -53,6 +53,7 @@ describe('Circular Reference Handling', () => { responseParts: [{ text: 'test result' }], resultDisplay: undefined, error: undefined, // undefined means success + errorType: undefined, }; const mockCompletedToolCall: CompletedToolCall = { @@ -100,6 +101,7 @@ describe('Circular Reference Handling', () => { responseParts: [{ text: 'test result' }], resultDisplay: undefined, error: undefined, // undefined means success + errorType: undefined, }; const mockCompletedToolCall: CompletedToolCall = { diff --git a/packages/core/src/telemetry/loggers.test.ts b/packages/core/src/telemetry/loggers.test.ts index 7a24bcca..3d8116cc 100644 --- a/packages/core/src/telemetry/loggers.test.ts +++ b/packages/core/src/telemetry/loggers.test.ts @@ -12,6 +12,7 @@ import { ErroredToolCall, GeminiClient, ToolConfirmationOutcome, + ToolErrorType, ToolRegistry, } from '../index.js'; import { logs } from '@opentelemetry/api-logs'; @@ -448,6 +449,7 @@ describe('loggers', () => { responseParts: 'test-response', resultDisplay: undefined, error: undefined, + errorType: undefined, }, tool: new EditTool(mockConfig), durationMs: 100, @@ -511,6 +513,7 @@ describe('loggers', () => { responseParts: 'test-response', resultDisplay: undefined, error: undefined, + errorType: undefined, }, durationMs: 100, outcome: ToolConfirmationOutcome.Cancel, @@ -574,6 +577,7 @@ describe('loggers', () => { 
responseParts: 'test-response', resultDisplay: undefined, error: undefined, + errorType: undefined, }, outcome: ToolConfirmationOutcome.ModifyWithEditor, tool: new EditTool(mockConfig), @@ -638,6 +642,7 @@ describe('loggers', () => { responseParts: 'test-response', resultDisplay: undefined, error: undefined, + errorType: undefined, }, tool: new EditTool(mockConfig), durationMs: 100, @@ -703,6 +708,7 @@ describe('loggers', () => { name: 'test-error-type', message: 'test-error', }, + errorType: ToolErrorType.UNKNOWN, }, durationMs: 100, }; @@ -729,8 +735,8 @@ describe('loggers', () => { success: false, error: 'test-error', 'error.message': 'test-error', - error_type: 'test-error-type', - 'error.type': 'test-error-type', + error_type: ToolErrorType.UNKNOWN, + 'error.type': ToolErrorType.UNKNOWN, prompt_id: 'prompt-id-5', }, }); diff --git a/packages/core/src/telemetry/loggers.ts b/packages/core/src/telemetry/loggers.ts index 073124f4..2aa0d86a 100644 --- a/packages/core/src/telemetry/loggers.ts +++ b/packages/core/src/telemetry/loggers.ts @@ -15,8 +15,9 @@ import { EVENT_TOOL_CALL, EVENT_USER_PROMPT, EVENT_FLASH_FALLBACK, - EVENT_FLASH_DECIDED_TO_CONTINUE, + EVENT_NEXT_SPEAKER_CHECK, SERVICE_NAME, + EVENT_SLASH_COMMAND, } from './constants.js'; import { ApiErrorEvent, @@ -26,8 +27,9 @@ import { ToolCallEvent, UserPromptEvent, FlashFallbackEvent, - FlashDecidedToContinueEvent, + NextSpeakerCheckEvent, LoopDetectedEvent, + SlashCommandEvent, } from './types.js'; import { recordApiErrorMetrics, @@ -312,22 +314,43 @@ export function logLoopDetected( logger.emit(logRecord); } -export function logFlashDecidedToContinue( +export function logNextSpeakerCheck( config: Config, - event: FlashDecidedToContinueEvent, + event: NextSpeakerCheckEvent, ): void { - ClearcutLogger.getInstance(config)?.logFlashDecidedToContinueEvent(event); + ClearcutLogger.getInstance(config)?.logNextSpeakerCheck(event); if (!isTelemetrySdkInitialized()) return; const attributes: LogAttributes = { 
...getCommonAttributes(config), ...event, - 'event.name': EVENT_FLASH_DECIDED_TO_CONTINUE, + 'event.name': EVENT_NEXT_SPEAKER_CHECK, }; const logger = logs.getLogger(SERVICE_NAME); const logRecord: LogRecord = { - body: `Flash decided to continue.`, + body: `Next speaker check.`, + attributes, + }; + logger.emit(logRecord); +} + +export function logSlashCommand( + config: Config, + event: SlashCommandEvent, +): void { + ClearcutLogger.getInstance(config)?.logSlashCommandEvent(event); + if (!isTelemetrySdkInitialized()) return; + + const attributes: LogAttributes = { + ...getCommonAttributes(config), + ...event, + 'event.name': EVENT_SLASH_COMMAND, + }; + + const logger = logs.getLogger(SERVICE_NAME); + const logRecord: LogRecord = { + body: `Slash command: ${event.command}.`, attributes, }; logger.emit(logRecord); diff --git a/packages/core/src/telemetry/telemetry.test.ts b/packages/core/src/telemetry/telemetry.test.ts index 9734e382..8ebb3d9a 100644 --- a/packages/core/src/telemetry/telemetry.test.ts +++ b/packages/core/src/telemetry/telemetry.test.ts @@ -12,6 +12,7 @@ import { } from './sdk.js'; import { Config } from '../config/config.js'; import { NodeSDK } from '@opentelemetry/sdk-node'; +import { IdeClient } from '../ide/ide-client.js'; vi.mock('@opentelemetry/sdk-node'); vi.mock('../config/config.js'); @@ -29,6 +30,7 @@ describe('telemetry', () => { targetDir: '/test/dir', debugMode: false, cwd: '/test/dir', + ideClient: IdeClient.getInstance(false), }); vi.spyOn(mockConfig, 'getTelemetryEnabled').mockReturnValue(true); vi.spyOn(mockConfig, 'getTelemetryOtlpEndpoint').mockReturnValue( diff --git a/packages/core/src/telemetry/types.ts b/packages/core/src/telemetry/types.ts index 69dffb08..9d1fd77a 100644 --- a/packages/core/src/telemetry/types.ts +++ b/packages/core/src/telemetry/types.ts @@ -137,7 +137,7 @@ export class ToolCallEvent { ? 
getDecisionFromOutcome(call.outcome) : undefined; this.error = call.response.error?.message; - this.error_type = call.response.error?.name; + this.error_type = call.response.errorType; this.prompt_id = call.request.prompt_id; } } @@ -266,15 +266,45 @@ export class LoopDetectedEvent { } } -export class FlashDecidedToContinueEvent { - 'event.name': 'flash_decided_to_continue'; +export class NextSpeakerCheckEvent { + 'event.name': 'next_speaker_check'; 'event.timestamp': string; // ISO 8601 prompt_id: string; + finish_reason: string; + result: string; - constructor(prompt_id: string) { - this['event.name'] = 'flash_decided_to_continue'; + constructor(prompt_id: string, finish_reason: string, result: string) { + this['event.name'] = 'next_speaker_check'; this['event.timestamp'] = new Date().toISOString(); this.prompt_id = prompt_id; + this.finish_reason = finish_reason; + this.result = result; + } +} + +export class SlashCommandEvent { + 'event.name': 'slash_command'; + 'event.timestamp': string; // ISO 8601 + command: string; + subcommand?: string; + + constructor(command: string, subcommand?: string) { + this['event.name'] = 'slash_command'; + this['event.timestamp'] = new Date().toISOString(); + this.command = command; + this.subcommand = subcommand; + } +} + +export class MalformedJsonResponseEvent { + 'event.name': 'malformed_json_response'; + 'event.timestamp': string; // ISO 8601 + model: string; + + constructor(model: string) { + this['event.name'] = 'malformed_json_response'; + this['event.timestamp'] = new Date().toISOString(); + this.model = model; } } @@ -288,4 +318,6 @@ export type TelemetryEvent = | ApiResponseEvent | FlashFallbackEvent | LoopDetectedEvent - | FlashDecidedToContinueEvent; + | NextSpeakerCheckEvent + | SlashCommandEvent + | MalformedJsonResponseEvent; diff --git a/packages/core/src/telemetry/uiTelemetry.test.ts b/packages/core/src/telemetry/uiTelemetry.test.ts index 38ba7a91..bce54ad8 100644 --- 
a/packages/core/src/telemetry/uiTelemetry.test.ts +++ b/packages/core/src/telemetry/uiTelemetry.test.ts @@ -22,6 +22,7 @@ import { ErroredToolCall, SuccessfulToolCall, } from '../core/coreToolScheduler.js'; +import { ToolErrorType } from '../tools/tool-error.js'; import { Tool, ToolConfirmationOutcome } from '../tools/tools.js'; const createFakeCompletedToolCall = ( @@ -54,6 +55,7 @@ const createFakeCompletedToolCall = ( }, }, error: undefined, + errorType: undefined, resultDisplay: 'Success!', }, durationMs: duration, @@ -73,6 +75,7 @@ const createFakeCompletedToolCall = ( }, }, error: error || new Error('Tool failed'), + errorType: ToolErrorType.UNKNOWN, resultDisplay: 'Failure!', }, durationMs: duration, diff --git a/packages/core/src/test-utils/mockWorkspaceContext.ts b/packages/core/src/test-utils/mockWorkspaceContext.ts new file mode 100644 index 00000000..61497b3e --- /dev/null +++ b/packages/core/src/test-utils/mockWorkspaceContext.ts @@ -0,0 +1,33 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { vi } from 'vitest'; +import { WorkspaceContext } from '../utils/workspaceContext.js'; + +/** + * Creates a mock WorkspaceContext for testing + * @param rootDir The root directory to use for the mock + * @param additionalDirs Optional additional directories to include in the workspace + * @returns A mock WorkspaceContext instance + */ +export function createMockWorkspaceContext( + rootDir: string, + additionalDirs: string[] = [], +): WorkspaceContext { + const allDirs = [rootDir, ...additionalDirs]; + + const mockWorkspaceContext = { + addDirectory: vi.fn(), + getDirectories: vi.fn().mockReturnValue(allDirs), + isPathWithinWorkspace: vi + .fn() + .mockImplementation((path: string) => + allDirs.some((dir) => path.startsWith(dir)), + ), + } as unknown as WorkspaceContext; + + return mockWorkspaceContext; +} diff --git a/packages/core/src/tools/edit.test.ts b/packages/core/src/tools/edit.test.ts index 
4ff33ff4..029d3a3c 100644 --- a/packages/core/src/tools/edit.test.ts +++ b/packages/core/src/tools/edit.test.ts @@ -27,11 +27,13 @@ vi.mock('../utils/editor.js', () => ({ import { describe, it, expect, beforeEach, afterEach, vi, Mock } from 'vitest'; import { EditTool, EditToolParams } from './edit.js'; import { FileDiff } from './tools.js'; +import { ToolErrorType } from './tool-error.js'; import path from 'path'; import fs from 'fs'; import os from 'os'; import { ApprovalMode, Config } from '../config/config.js'; import { Content, Part, SchemaUnion } from '@google/genai'; +import { createMockWorkspaceContext } from '../test-utils/mockWorkspaceContext.js'; describe('EditTool', () => { let tool: EditTool; @@ -41,6 +43,7 @@ describe('EditTool', () => { let geminiClient: any; beforeEach(() => { + vi.restoreAllMocks(); tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'edit-tool-test-')); rootDir = path.join(tempDir, 'root'); fs.mkdirSync(rootDir); @@ -54,6 +57,7 @@ describe('EditTool', () => { getTargetDir: () => rootDir, getApprovalMode: vi.fn(), setApprovalMode: vi.fn(), + getWorkspaceContext: () => createMockWorkspaceContext(rootDir), // getGeminiConfig: () => ({ apiKey: 'test-api-key' }), // This was not a real Config method // Add other properties/methods of Config if EditTool uses them // Minimal other methods to satisfy Config type if needed by EditTool constructor or other direct uses: @@ -215,8 +219,9 @@ describe('EditTool', () => { old_string: 'old', new_string: 'new', }; - expect(tool.validateToolParams(params)).toMatch( - /File path must be within the root directory/, + const error = tool.validateToolParams(params); + expect(error).toContain( + 'File path must be within one of the workspace directories', ); }); }); @@ -623,6 +628,98 @@ describe('EditTool', () => { }); }); + describe('Error Scenarios', () => { + const testFile = 'error_test.txt'; + let filePath: string; + + beforeEach(() => { + filePath = path.join(rootDir, testFile); + }); + + it('should 
return FILE_NOT_FOUND error', async () => { + const params: EditToolParams = { + file_path: filePath, + old_string: 'any', + new_string: 'new', + }; + const result = await tool.execute(params, new AbortController().signal); + expect(result.error?.type).toBe(ToolErrorType.FILE_NOT_FOUND); + }); + + it('should return ATTEMPT_TO_CREATE_EXISTING_FILE error', async () => { + fs.writeFileSync(filePath, 'existing content', 'utf8'); + const params: EditToolParams = { + file_path: filePath, + old_string: '', + new_string: 'new content', + }; + const result = await tool.execute(params, new AbortController().signal); + expect(result.error?.type).toBe( + ToolErrorType.ATTEMPT_TO_CREATE_EXISTING_FILE, + ); + }); + + it('should return NO_OCCURRENCE_FOUND error', async () => { + fs.writeFileSync(filePath, 'content', 'utf8'); + const params: EditToolParams = { + file_path: filePath, + old_string: 'not-found', + new_string: 'new', + }; + const result = await tool.execute(params, new AbortController().signal); + expect(result.error?.type).toBe(ToolErrorType.EDIT_NO_OCCURRENCE_FOUND); + }); + + it('should return EXPECTED_OCCURRENCE_MISMATCH error', async () => { + fs.writeFileSync(filePath, 'one one two', 'utf8'); + const params: EditToolParams = { + file_path: filePath, + old_string: 'one', + new_string: 'new', + expected_replacements: 3, + }; + const result = await tool.execute(params, new AbortController().signal); + expect(result.error?.type).toBe( + ToolErrorType.EDIT_EXPECTED_OCCURRENCE_MISMATCH, + ); + }); + + it('should return NO_CHANGE error', async () => { + fs.writeFileSync(filePath, 'content', 'utf8'); + const params: EditToolParams = { + file_path: filePath, + old_string: 'content', + new_string: 'content', + }; + const result = await tool.execute(params, new AbortController().signal); + expect(result.error?.type).toBe(ToolErrorType.EDIT_NO_CHANGE); + }); + + it('should return INVALID_PARAMETERS error for relative path', async () => { + const params: EditToolParams = { + 
file_path: 'relative/path.txt', + old_string: 'a', + new_string: 'b', + }; + const result = await tool.execute(params, new AbortController().signal); + expect(result.error?.type).toBe(ToolErrorType.INVALID_TOOL_PARAMS); + }); + + it('should return FILE_WRITE_FAILURE on write error', async () => { + fs.writeFileSync(filePath, 'content', 'utf8'); + // Make file readonly to trigger a write error + fs.chmodSync(filePath, '444'); + + const params: EditToolParams = { + file_path: filePath, + old_string: 'content', + new_string: 'new content', + }; + const result = await tool.execute(params, new AbortController().signal); + expect(result.error?.type).toBe(ToolErrorType.FILE_WRITE_FAILURE); + }); + }); + describe('getDescription', () => { it('should return "No file changes to..." if old_string and new_string are the same', () => { const testFileName = 'test.txt'; @@ -675,4 +772,28 @@ describe('EditTool', () => { ); }); }); + + describe('workspace boundary validation', () => { + it('should validate paths are within workspace root', () => { + const validPath = { + file_path: path.join(rootDir, 'file.txt'), + old_string: 'old', + new_string: 'new', + }; + expect(tool.validateToolParams(validPath)).toBeNull(); + }); + + it('should reject paths outside workspace root', () => { + const invalidPath = { + file_path: '/etc/passwd', + old_string: 'root', + new_string: 'hacked', + }; + const error = tool.validateToolParams(invalidPath); + expect(error).toContain( + 'File path must be within one of the workspace directories', + ); + expect(error).toContain(rootDir); + }); + }); }); diff --git a/packages/core/src/tools/edit.ts b/packages/core/src/tools/edit.ts index fd936611..25da2292 100644 --- a/packages/core/src/tools/edit.ts +++ b/packages/core/src/tools/edit.ts @@ -17,6 +17,7 @@ import { ToolResult, ToolResultDisplay, } from './tools.js'; +import { ToolErrorType } from './tool-error.js'; import { Type } from '@google/genai'; import { SchemaValidator } from 
'../utils/schemaValidator.js'; import { makeRelative, shortenPath } from '../utils/paths.js'; @@ -26,7 +27,6 @@ import { ensureCorrectEdit } from '../utils/editCorrector.js'; import { DEFAULT_DIFF_OPTIONS } from './diffOptions.js'; import { ReadFileTool } from './read-file.js'; import { ModifiableTool, ModifyContext } from './modifiable-tool.js'; -import { isWithinRoot } from '../utils/fileUtils.js'; /** * Parameters for the Edit tool @@ -63,7 +63,7 @@ interface CalculatedEdit { currentContent: string | null; newContent: string; occurrences: number; - error?: { display: string; raw: string }; + error?: { display: string; raw: string; type: ToolErrorType }; isNewFile: boolean; } @@ -137,8 +137,10 @@ Expectation for required parameters: return `File path must be absolute: ${params.file_path}`; } - if (!isWithinRoot(params.file_path, this.config.getTargetDir())) { - return `File path must be within the root directory (${this.config.getTargetDir()}): ${params.file_path}`; + const workspaceContext = this.config.getWorkspaceContext(); + if (!workspaceContext.isPathWithinWorkspace(params.file_path)) { + const directories = workspaceContext.getDirectories(); + return `File path must be within one of the workspace directories: ${directories.join(', ')}`; } return null; @@ -190,7 +192,9 @@ Expectation for required parameters: let finalNewString = params.new_string; let finalOldString = params.old_string; let occurrences = 0; - let error: { display: string; raw: string } | undefined = undefined; + let error: + | { display: string; raw: string; type: ToolErrorType } + | undefined = undefined; try { currentContent = fs.readFileSync(params.file_path, 'utf8'); @@ -213,6 +217,7 @@ Expectation for required parameters: error = { display: `File not found. Cannot apply edit. 
Use an empty old_string to create a new file.`, raw: `File not found: ${params.file_path}`, + type: ToolErrorType.FILE_NOT_FOUND, }; } else if (currentContent !== null) { // Editing an existing file @@ -232,11 +237,13 @@ Expectation for required parameters: error = { display: `Failed to edit. Attempted to create a file that already exists.`, raw: `File already exists, cannot create: ${params.file_path}`, + type: ToolErrorType.ATTEMPT_TO_CREATE_EXISTING_FILE, }; } else if (occurrences === 0) { error = { display: `Failed to edit, could not find the string to replace.`, raw: `Failed to edit, 0 occurrences found for old_string in ${params.file_path}. No edits made. The exact text in old_string was not found. Ensure you're not escaping content incorrectly and check whitespace, indentation, and context. Use ${ReadFileTool.Name} tool to verify.`, + type: ToolErrorType.EDIT_NO_OCCURRENCE_FOUND, }; } else if (occurrences !== expectedReplacements) { const occurrenceTerm = @@ -245,11 +252,13 @@ Expectation for required parameters: error = { display: `Failed to edit, expected ${expectedReplacements} ${occurrenceTerm} but found ${occurrences}.`, raw: `Failed to edit, Expected ${expectedReplacements} ${occurrenceTerm} but found ${occurrences} for old_string in file: ${params.file_path}`, + type: ToolErrorType.EDIT_EXPECTED_OCCURRENCE_MISMATCH, }; } else if (finalOldString === finalNewString) { error = { display: `No changes to apply. The old_string and new_string are identical.`, raw: `No changes to apply. The old_string and new_string are identical in file: ${params.file_path}`, + type: ToolErrorType.EDIT_NO_CHANGE, }; } } else { @@ -257,6 +266,7 @@ Expectation for required parameters: error = { display: `Failed to read content of file.`, raw: `Failed to read content of existing file: ${params.file_path}`, + type: ToolErrorType.READ_CONTENT_FAILURE, }; } @@ -373,6 +383,10 @@ Expectation for required parameters: return { llmContent: `Error: Invalid parameters provided. 
Reason: ${validationError}`, returnDisplay: `Error: ${validationError}`, + error: { + message: validationError, + type: ToolErrorType.INVALID_TOOL_PARAMS, + }, }; } @@ -384,6 +398,10 @@ Expectation for required parameters: return { llmContent: `Error preparing edit: ${errorMsg}`, returnDisplay: `Error preparing edit: ${errorMsg}`, + error: { + message: errorMsg, + type: ToolErrorType.EDIT_PREPARATION_FAILURE, + }, }; } @@ -391,6 +409,10 @@ Expectation for required parameters: return { llmContent: editData.error.raw, returnDisplay: `Error: ${editData.error.display}`, + error: { + message: editData.error.raw, + type: editData.error.type, + }, }; } @@ -441,6 +463,10 @@ Expectation for required parameters: return { llmContent: `Error executing edit: ${errorMsg}`, returnDisplay: `Error writing file: ${errorMsg}`, + error: { + message: errorMsg, + type: ToolErrorType.FILE_WRITE_FAILURE, + }, }; } } diff --git a/packages/core/src/tools/glob.test.ts b/packages/core/src/tools/glob.test.ts index 51effe4e..0ee6c0ee 100644 --- a/packages/core/src/tools/glob.test.ts +++ b/packages/core/src/tools/glob.test.ts @@ -9,9 +9,10 @@ import { partListUnionToString } from '../core/geminiRequest.js'; import path from 'path'; import fs from 'fs/promises'; import os from 'os'; -import { describe, it, expect, beforeEach, afterEach } from 'vitest'; // Removed vi +import { describe, it, expect, beforeEach, afterEach } from 'vitest'; import { FileDiscoveryService } from '../services/fileDiscoveryService.js'; import { Config } from '../config/config.js'; +import { createMockWorkspaceContext } from '../test-utils/mockWorkspaceContext.js'; describe('GlobTool', () => { let tempRootDir: string; // This will be the rootDirectory for the GlobTool instance @@ -23,6 +24,7 @@ describe('GlobTool', () => { getFileService: () => new FileDiscoveryService(tempRootDir), getFileFilteringRespectGitIgnore: () => true, getTargetDir: () => tempRootDir, + getWorkspaceContext: () => 
createMockWorkspaceContext(tempRootDir), } as unknown as Config; beforeEach(async () => { @@ -243,7 +245,7 @@ describe('GlobTool', () => { path: '../../../../../../../../../../tmp', }; // Definitely outside expect(specificGlobTool.validateToolParams(paramsOutside)).toContain( - "resolves outside the tool's root directory", + 'resolves outside the allowed workspace directories', ); }); @@ -264,6 +266,37 @@ describe('GlobTool', () => { ); }); }); + + describe('workspace boundary validation', () => { + it('should validate search paths are within workspace boundaries', () => { + const validPath = { pattern: '*.ts', path: 'sub' }; + const invalidPath = { pattern: '*.ts', path: '../..' }; + + expect(globTool.validateToolParams(validPath)).toBeNull(); + expect(globTool.validateToolParams(invalidPath)).toContain( + 'resolves outside the allowed workspace directories', + ); + }); + + it('should provide clear error messages when path is outside workspace', () => { + const invalidPath = { pattern: '*.ts', path: '/etc' }; + const error = globTool.validateToolParams(invalidPath); + + expect(error).toContain( + 'resolves outside the allowed workspace directories', + ); + expect(error).toContain(tempRootDir); + }); + + it('should work with paths in workspace subdirectories', async () => { + const params: GlobToolParams = { pattern: '*.md', path: 'sub' }; + const result = await globTool.execute(params, abortSignal); + + expect(result.llmContent).toContain('Found 2 file(s)'); + expect(result.llmContent).toContain('fileC.md'); + expect(result.llmContent).toContain('FileD.MD'); + }); + }); }); describe('sortFileEntries', () => { diff --git a/packages/core/src/tools/glob.ts b/packages/core/src/tools/glob.ts index 2e829e4c..5bcb9778 100644 --- a/packages/core/src/tools/glob.ts +++ b/packages/core/src/tools/glob.ts @@ -11,7 +11,6 @@ import { SchemaValidator } from '../utils/schemaValidator.js'; import { BaseTool, Icon, ToolResult } from './tools.js'; import { Type } from 
'@google/genai'; import { shortenPath, makeRelative } from '../utils/paths.js'; -import { isWithinRoot } from '../utils/fileUtils.js'; import { Config } from '../config/config.js'; // Subset of 'Path' interface provided by 'glob' that we can implement for testing @@ -130,8 +129,10 @@ export class GlobTool extends BaseTool { params.path || '.', ); - if (!isWithinRoot(searchDirAbsolute, this.config.getTargetDir())) { - return `Search path ("${searchDirAbsolute}") resolves outside the tool's root directory ("${this.config.getTargetDir()}").`; + const workspaceContext = this.config.getWorkspaceContext(); + if (!workspaceContext.isPathWithinWorkspace(searchDirAbsolute)) { + const directories = workspaceContext.getDirectories(); + return `Search path ("${searchDirAbsolute}") resolves outside the allowed workspace directories: ${directories.join(', ')}`; } const targetDir = searchDirAbsolute || this.config.getTargetDir(); @@ -189,10 +190,27 @@ export class GlobTool extends BaseTool { } try { - const searchDirAbsolute = path.resolve( - this.config.getTargetDir(), - params.path || '.', - ); + const workspaceContext = this.config.getWorkspaceContext(); + const workspaceDirectories = workspaceContext.getDirectories(); + + // If a specific path is provided, resolve it and check if it's within workspace + let searchDirectories: readonly string[]; + if (params.path) { + const searchDirAbsolute = path.resolve( + this.config.getTargetDir(), + params.path, + ); + if (!workspaceContext.isPathWithinWorkspace(searchDirAbsolute)) { + return { + llmContent: `Error: Path "${params.path}" is not within any workspace directory`, + returnDisplay: `Path is not within workspace`, + }; + } + searchDirectories = [searchDirAbsolute]; + } else { + // Search across all workspace directories + searchDirectories = workspaceDirectories; + } // Get centralized file discovery service const respectGitIgnore = @@ -200,17 +218,26 @@ export class GlobTool extends BaseTool { 
this.config.getFileFilteringRespectGitIgnore(); const fileDiscovery = this.config.getFileService(); - const entries = await glob(params.pattern, { - cwd: searchDirAbsolute, - withFileTypes: true, - nodir: true, - stat: true, - nocase: !params.case_sensitive, - dot: true, - ignore: ['**/node_modules/**', '**/.git/**'], - follow: false, - signal, - }); + // Collect entries from all search directories + let allEntries: GlobPath[] = []; + + for (const searchDir of searchDirectories) { + const entries = (await glob(params.pattern, { + cwd: searchDir, + withFileTypes: true, + nodir: true, + stat: true, + nocase: !params.case_sensitive, + dot: true, + ignore: ['**/node_modules/**', '**/.git/**'], + follow: false, + signal, + })) as GlobPath[]; + + allEntries = allEntries.concat(entries); + } + + const entries = allEntries; // Apply git-aware filtering if enabled and in git repository let filteredEntries = entries; @@ -236,7 +263,12 @@ export class GlobTool extends BaseTool { } if (!filteredEntries || filteredEntries.length === 0) { - let message = `No files found matching pattern "${params.pattern}" within ${searchDirAbsolute}.`; + let message = `No files found matching pattern "${params.pattern}"`; + if (searchDirectories.length === 1) { + message += ` within ${searchDirectories[0]}`; + } else { + message += ` within ${searchDirectories.length} workspace directories`; + } if (gitIgnoredCount > 0) { message += ` (${gitIgnoredCount} files were git-ignored)`; } @@ -263,7 +295,12 @@ export class GlobTool extends BaseTool { const fileListDescription = sortedAbsolutePaths.join('\n'); const fileCount = sortedAbsolutePaths.length; - let resultMessage = `Found ${fileCount} file(s) matching "${params.pattern}" within ${searchDirAbsolute}`; + let resultMessage = `Found ${fileCount} file(s) matching "${params.pattern}"`; + if (searchDirectories.length === 1) { + resultMessage += ` within ${searchDirectories[0]}`; + } else { + resultMessage += ` across ${searchDirectories.length} 
workspace directories`; + } if (gitIgnoredCount > 0) { resultMessage += ` (${gitIgnoredCount} additional files were git-ignored)`; } diff --git a/packages/core/src/tools/grep.test.ts b/packages/core/src/tools/grep.test.ts index 01295083..aadc93ae 100644 --- a/packages/core/src/tools/grep.test.ts +++ b/packages/core/src/tools/grep.test.ts @@ -10,6 +10,7 @@ import path from 'path'; import fs from 'fs/promises'; import os from 'os'; import { Config } from '../config/config.js'; +import { createMockWorkspaceContext } from '../test-utils/mockWorkspaceContext.js'; // Mock the child_process module to control grep/git grep behavior vi.mock('child_process', () => ({ @@ -33,6 +34,7 @@ describe('GrepTool', () => { const mockConfig = { getTargetDir: () => tempRootDir, + getWorkspaceContext: () => createMockWorkspaceContext(tempRootDir), } as unknown as Config; beforeEach(async () => { @@ -120,7 +122,7 @@ describe('GrepTool', () => { const params: GrepToolParams = { pattern: 'world' }; const result = await grepTool.execute(params, abortSignal); expect(result.llmContent).toContain( - 'Found 3 matches for pattern "world" in path "."', + 'Found 3 matches for pattern "world" in the workspace directory', ); expect(result.llmContent).toContain('File: fileA.txt'); expect(result.llmContent).toContain('L1: hello world'); @@ -147,7 +149,7 @@ describe('GrepTool', () => { const params: GrepToolParams = { pattern: 'hello', include: '*.js' }; const result = await grepTool.execute(params, abortSignal); expect(result.llmContent).toContain( - 'Found 1 match for pattern "hello" in path "." 
(filter: "*.js")', + 'Found 1 match for pattern "hello" in the workspace directory (filter: "*.js"):', ); expect(result.llmContent).toContain('File: fileB.js'); expect(result.llmContent).toContain( @@ -179,7 +181,7 @@ describe('GrepTool', () => { const params: GrepToolParams = { pattern: 'nonexistentpattern' }; const result = await grepTool.execute(params, abortSignal); expect(result.llmContent).toContain( - 'No matches found for pattern "nonexistentpattern" in path "."', + 'No matches found for pattern "nonexistentpattern" in the workspace directory.', ); expect(result.returnDisplay).toBe('No matches found'); }); @@ -188,7 +190,7 @@ describe('GrepTool', () => { const params: GrepToolParams = { pattern: 'foo.*bar' }; // Matches 'const foo = "bar";' const result = await grepTool.execute(params, abortSignal); expect(result.llmContent).toContain( - 'Found 1 match for pattern "foo.*bar" in path "."', + 'Found 1 match for pattern "foo.*bar" in the workspace directory:', ); expect(result.llmContent).toContain('File: fileB.js'); expect(result.llmContent).toContain('L1: const foo = "bar";'); @@ -198,7 +200,7 @@ describe('GrepTool', () => { const params: GrepToolParams = { pattern: 'HELLO' }; const result = await grepTool.execute(params, abortSignal); expect(result.llmContent).toContain( - 'Found 2 matches for pattern "HELLO" in path "."', + 'Found 2 matches for pattern "HELLO" in the workspace directory:', ); expect(result.llmContent).toContain('File: fileA.txt'); expect(result.llmContent).toContain('L1: hello world'); @@ -220,6 +222,98 @@ describe('GrepTool', () => { }); }); + describe('multi-directory workspace', () => { + it('should search across all workspace directories when no path is specified', async () => { + // Create additional directory with test files + const secondDir = await fs.mkdtemp( + path.join(os.tmpdir(), 'grep-tool-second-'), + ); + await fs.writeFile( + path.join(secondDir, 'other.txt'), + 'hello from second directory\nworld in second', + ); + await 
fs.writeFile( + path.join(secondDir, 'another.js'), + 'function world() { return "test"; }', + ); + + // Create a mock config with multiple directories + const multiDirConfig = { + getTargetDir: () => tempRootDir, + getWorkspaceContext: () => + createMockWorkspaceContext(tempRootDir, [secondDir]), + } as unknown as Config; + + const multiDirGrepTool = new GrepTool(multiDirConfig); + const params: GrepToolParams = { pattern: 'world' }; + const result = await multiDirGrepTool.execute(params, abortSignal); + + // Should find matches in both directories + expect(result.llmContent).toContain( + 'Found 5 matches for pattern "world"', + ); + + // Matches from first directory + expect(result.llmContent).toContain('fileA.txt'); + expect(result.llmContent).toContain('L1: hello world'); + expect(result.llmContent).toContain('L2: second line with world'); + expect(result.llmContent).toContain('fileC.txt'); + expect(result.llmContent).toContain('L1: another world in sub dir'); + + // Matches from second directory (with directory name prefix) + const secondDirName = path.basename(secondDir); + expect(result.llmContent).toContain( + `File: ${path.join(secondDirName, 'other.txt')}`, + ); + expect(result.llmContent).toContain('L2: world in second'); + expect(result.llmContent).toContain( + `File: ${path.join(secondDirName, 'another.js')}`, + ); + expect(result.llmContent).toContain('L1: function world()'); + + // Clean up + await fs.rm(secondDir, { recursive: true, force: true }); + }); + + it('should search only specified path within workspace directories', async () => { + // Create additional directory + const secondDir = await fs.mkdtemp( + path.join(os.tmpdir(), 'grep-tool-second-'), + ); + await fs.mkdir(path.join(secondDir, 'sub')); + await fs.writeFile( + path.join(secondDir, 'sub', 'test.txt'), + 'hello from second sub directory', + ); + + // Create a mock config with multiple directories + const multiDirConfig = { + getTargetDir: () => tempRootDir, + getWorkspaceContext: 
() => + createMockWorkspaceContext(tempRootDir, [secondDir]), + } as unknown as Config; + + const multiDirGrepTool = new GrepTool(multiDirConfig); + + // Search only in the 'sub' directory of the first workspace + const params: GrepToolParams = { pattern: 'world', path: 'sub' }; + const result = await multiDirGrepTool.execute(params, abortSignal); + + // Should only find matches in the specified sub directory + expect(result.llmContent).toContain( + 'Found 1 match for pattern "world" in path "sub"', + ); + expect(result.llmContent).toContain('File: fileC.txt'); + expect(result.llmContent).toContain('L1: another world in sub dir'); + + // Should not contain matches from second directory + expect(result.llmContent).not.toContain('test.txt'); + + // Clean up + await fs.rm(secondDir, { recursive: true, force: true }); + }); + }); + describe('getDescription', () => { it('should generate correct description with pattern only', () => { const params: GrepToolParams = { pattern: 'testPattern' }; @@ -246,6 +340,21 @@ describe('GrepTool', () => { ); }); + it('should indicate searching across all workspace directories when no path specified', () => { + // Create a mock config with multiple directories + const multiDirConfig = { + getTargetDir: () => tempRootDir, + getWorkspaceContext: () => + createMockWorkspaceContext(tempRootDir, ['/another/dir']), + } as unknown as Config; + + const multiDirGrepTool = new GrepTool(multiDirConfig); + const params: GrepToolParams = { pattern: 'testPattern' }; + expect(multiDirGrepTool.getDescription(params)).toBe( + "'testPattern' across all workspace directories", + ); + }); + it('should generate correct description with pattern, include, and path', () => { const params: GrepToolParams = { pattern: 'testPattern', diff --git a/packages/core/src/tools/grep.ts b/packages/core/src/tools/grep.ts index c1f9ecf6..212b5971 100644 --- a/packages/core/src/tools/grep.ts +++ b/packages/core/src/tools/grep.ts @@ -92,22 +92,23 @@ export class GrepTool 
extends BaseTool { /** - * Checks if a path is within the root directory and resolves it. - * @param relativePath Path relative to the root directory (or undefined for root). - * @returns The absolute path if valid and exists. - * @throws {Error} If path is outside root, doesn't exist, or isn't a directory. + * Checks if a path is within the workspace and resolves it. + * @param relativePath Path relative to the target directory (or undefined to search all workspace directories). + * @returns The absolute path if valid and exists, or null if no path specified (to search all directories). + * @throws {Error} If path is outside the workspace, doesn't exist, or isn't a directory. */ - private resolveAndValidatePath(relativePath?: string): string { - const targetPath = path.resolve( - this.config.getTargetDir(), - relativePath || '.', - ); + private resolveAndValidatePath(relativePath?: string): string | null { + // If no path specified, return null to indicate searching all workspace directories + if (!relativePath) { + return null; + } - // Security Check: Ensure the resolved path is still within the root directory. - if ( - !targetPath.startsWith(this.config.getTargetDir()) && - targetPath !== this.config.getTargetDir() - ) { + const targetPath = path.resolve(this.config.getTargetDir(), relativePath); + + // Security Check: Ensure the resolved path is within workspace boundaries + const workspaceContext = this.config.getWorkspaceContext(); + if (!workspaceContext.isPathWithinWorkspace(targetPath)) { + const directories = workspaceContext.getDirectories(); throw new Error( - `Path validation failed: Attempted path "${relativePath || '.'}" resolves outside the allowed root directory "${this.config.getTargetDir()}".`, + `Path validation failed: Attempted path "${relativePath}" resolves outside the allowed workspace directories: ${directories.join(', ')}`, ); } @@ -146,10 +147,13 @@ export class GrepTool extends BaseTool { return `Invalid regular expression pattern provided: ${params.pattern}. 
Error: ${getErrorMessage(error)}`; } - try { - this.resolveAndValidatePath(params.path); - } catch (error) { - return getErrorMessage(error); + // Only validate path if one is provided + if (params.path) { + try { + this.resolveAndValidatePath(params.path); + } catch (error) { + return getErrorMessage(error); + } } return null; // Parameters are valid @@ -174,44 +178,78 @@ export class GrepTool extends BaseTool { }; } - let searchDirAbs: string; try { - searchDirAbs = this.resolveAndValidatePath(params.path); + const workspaceContext = this.config.getWorkspaceContext(); + const searchDirAbs = this.resolveAndValidatePath(params.path); const searchDirDisplay = params.path || '.'; - const matches: GrepMatch[] = await this.performGrepSearch({ - pattern: params.pattern, - path: searchDirAbs, - include: params.include, - signal, - }); + // Determine which directories to search + let searchDirectories: readonly string[]; + if (searchDirAbs === null) { + // No path specified - search all workspace directories + searchDirectories = workspaceContext.getDirectories(); + } else { + // Specific path provided - search only that directory + searchDirectories = [searchDirAbs]; + } - if (matches.length === 0) { - const noMatchMsg = `No matches found for pattern "${params.pattern}" in path "${searchDirDisplay}"${params.include ? 
` (filter: "${params.include}")` : ''}.`; + // Collect matches from all search directories + let allMatches: GrepMatch[] = []; + for (const searchDir of searchDirectories) { + const matches = await this.performGrepSearch({ + pattern: params.pattern, + path: searchDir, + include: params.include, + signal, + }); + + // Add directory prefix if searching multiple directories + if (searchDirectories.length > 1) { + const dirName = path.basename(searchDir); + matches.forEach((match) => { + match.filePath = path.join(dirName, match.filePath); + }); + } + + allMatches = allMatches.concat(matches); + } + + let searchLocationDescription: string; + if (searchDirAbs === null) { + const numDirs = workspaceContext.getDirectories().length; + searchLocationDescription = + numDirs > 1 + ? `across ${numDirs} workspace directories` + : `in the workspace directory`; + } else { + searchLocationDescription = `in path "${searchDirDisplay}"`; + } + + if (allMatches.length === 0) { + const noMatchMsg = `No matches found for pattern "${params.pattern}" ${searchLocationDescription}${params.include ? ` (filter: "${params.include}")` : ''}.`; return { llmContent: noMatchMsg, returnDisplay: `No matches found` }; } - const matchesByFile = matches.reduce( + // Group matches by file + const matchesByFile = allMatches.reduce( (acc, match) => { - const relativeFilePath = - path.relative( - searchDirAbs, - path.resolve(searchDirAbs, match.filePath), - ) || path.basename(match.filePath); - if (!acc[relativeFilePath]) { - acc[relativeFilePath] = []; + const fileKey = match.filePath; + if (!acc[fileKey]) { + acc[fileKey] = []; } - acc[relativeFilePath].push(match); - acc[relativeFilePath].sort((a, b) => a.lineNumber - b.lineNumber); + acc[fileKey].push(match); + acc[fileKey].sort((a, b) => a.lineNumber - b.lineNumber); return acc; }, {} as Record<string, GrepMatch[]>, ); - const matchCount = matches.length; + const matchCount = allMatches.length; const matchTerm = matchCount === 1 ? 
'match' : 'matches'; - let llmContent = `Found ${matchCount} ${matchTerm} for pattern "${params.pattern}" in path "${searchDirDisplay}"${params.include ? ` (filter: "${params.include}")` : ''}:\n---\n`; + let llmContent = `Found ${matchCount} ${matchTerm} for pattern "${params.pattern}" ${searchLocationDescription}${params.include ? ` (filter: "${params.include}")` : ''}: +--- +`; for (const filePath in matchesByFile) { llmContent += `File: ${filePath}\n`; @@ -334,6 +372,13 @@ export class GrepTool extends BaseTool { ); description += ` within ${shortenPath(relativePath)}`; } + } else { + // When no path is specified, indicate searching all workspace directories + const workspaceContext = this.config.getWorkspaceContext(); + const directories = workspaceContext.getDirectories(); + if (directories.length > 1) { + description += ` across all workspace directories`; + } } return description; } diff --git a/packages/core/src/tools/ls.test.ts b/packages/core/src/tools/ls.test.ts new file mode 100644 index 00000000..fb99d829 --- /dev/null +++ b/packages/core/src/tools/ls.test.ts @@ -0,0 +1,496 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +/* eslint-disable @typescript-eslint/no-explicit-any */ + +import { describe, it, expect, beforeEach, vi } from 'vitest'; +import fs from 'fs'; +import path from 'path'; + +vi.mock('fs', () => ({ + default: { + statSync: vi.fn(), + readdirSync: vi.fn(), + }, + statSync: vi.fn(), + readdirSync: vi.fn(), +})); +import { LSTool } from './ls.js'; +import { Config } from '../config/config.js'; +import { WorkspaceContext } from '../utils/workspaceContext.js'; +import { FileDiscoveryService } from '../services/fileDiscoveryService.js'; + +describe('LSTool', () => { + let lsTool: LSTool; + let mockConfig: Config; + let mockWorkspaceContext: WorkspaceContext; + let mockFileService: FileDiscoveryService; + const mockPrimaryDir = '/home/user/project'; + const mockSecondaryDir = 
'/home/user/other-project'; + + beforeEach(() => { + vi.resetAllMocks(); + + // Mock WorkspaceContext + mockWorkspaceContext = { + getDirectories: vi + .fn() + .mockReturnValue([mockPrimaryDir, mockSecondaryDir]), + isPathWithinWorkspace: vi + .fn() + .mockImplementation( + (path) => + path.startsWith(mockPrimaryDir) || + path.startsWith(mockSecondaryDir), + ), + addDirectory: vi.fn(), + } as unknown as WorkspaceContext; + + // Mock FileService + mockFileService = { + shouldGitIgnoreFile: vi.fn().mockReturnValue(false), + shouldGeminiIgnoreFile: vi.fn().mockReturnValue(false), + } as unknown as FileDiscoveryService; + + // Mock Config + mockConfig = { + getTargetDir: vi.fn().mockReturnValue(mockPrimaryDir), + getWorkspaceContext: vi.fn().mockReturnValue(mockWorkspaceContext), + getFileService: vi.fn().mockReturnValue(mockFileService), + getFileFilteringOptions: vi.fn().mockReturnValue({ + respectGitIgnore: true, + respectGeminiIgnore: true, + }), + } as unknown as Config; + + lsTool = new LSTool(mockConfig); + }); + + describe('parameter validation', () => { + it('should accept valid absolute paths within workspace', () => { + const params = { + path: '/home/user/project/src', + }; + + const error = lsTool.validateToolParams(params); + expect(error).toBeNull(); + }); + + it('should reject relative paths', () => { + const params = { + path: './src', + }; + + const error = lsTool.validateToolParams(params); + expect(error).toBe('Path must be absolute: ./src'); + }); + + it('should reject paths outside workspace with clear error message', () => { + const params = { + path: '/etc/passwd', + }; + + const error = lsTool.validateToolParams(params); + expect(error).toBe( + 'Path must be within one of the workspace directories: /home/user/project, /home/user/other-project', + ); + }); + + it('should accept paths in secondary workspace directory', () => { + const params = { + path: '/home/user/other-project/lib', + }; + + const error = lsTool.validateToolParams(params); + 
expect(error).toBeNull(); + }); + }); + + describe('execute', () => { + it('should list files in a directory', async () => { + const testPath = '/home/user/project/src'; + const mockFiles = ['file1.ts', 'file2.ts', 'subdir']; + const mockStats = { + isDirectory: vi.fn(), + mtime: new Date(), + size: 1024, + }; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + const pathStr = path.toString(); + if (pathStr === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + // For individual files + if (pathStr.toString().endsWith('subdir')) { + return { ...mockStats, isDirectory: () => true, size: 0 } as fs.Stats; + } + return { ...mockStats, isDirectory: () => false } as fs.Stats; + }); + + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('[DIR] subdir'); + expect(result.llmContent).toContain('file1.ts'); + expect(result.llmContent).toContain('file2.ts'); + expect(result.returnDisplay).toBe('Listed 3 item(s).'); + }); + + it('should list files from secondary workspace directory', async () => { + const testPath = '/home/user/other-project/lib'; + const mockFiles = ['module1.js', 'module2.js']; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + if (path.toString() === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + return { + isDirectory: () => false, + mtime: new Date(), + size: 2048, + } as fs.Stats; + }); + + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('module1.js'); + expect(result.llmContent).toContain('module2.js'); + expect(result.returnDisplay).toBe('Listed 2 item(s).'); + }); + + it('should handle empty directories', async () => { + const testPath = '/home/user/project/empty'; + + 
vi.mocked(fs.statSync).mockReturnValue({ + isDirectory: () => true, + } as fs.Stats); + vi.mocked(fs.readdirSync).mockReturnValue([]); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toBe( + 'Directory /home/user/project/empty is empty.', + ); + expect(result.returnDisplay).toBe('Directory is empty.'); + }); + + it('should respect ignore patterns', async () => { + const testPath = '/home/user/project/src'; + const mockFiles = ['test.js', 'test.spec.js', 'index.js']; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + const pathStr = path.toString(); + if (pathStr === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + return { + isDirectory: () => false, + mtime: new Date(), + size: 1024, + } as fs.Stats; + }); + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + + const result = await lsTool.execute( + { path: testPath, ignore: ['*.spec.js'] }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('test.js'); + expect(result.llmContent).toContain('index.js'); + expect(result.llmContent).not.toContain('test.spec.js'); + expect(result.returnDisplay).toBe('Listed 2 item(s).'); + }); + + it('should respect gitignore patterns', async () => { + const testPath = '/home/user/project/src'; + const mockFiles = ['file1.js', 'file2.js', 'ignored.js']; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + const pathStr = path.toString(); + if (pathStr === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + return { + isDirectory: () => false, + mtime: new Date(), + size: 1024, + } as fs.Stats; + }); + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + (mockFileService.shouldGitIgnoreFile as any).mockImplementation( + (path: string) => path.includes('ignored.js'), + ); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + 
expect(result.llmContent).toContain('file1.js'); + expect(result.llmContent).toContain('file2.js'); + expect(result.llmContent).not.toContain('ignored.js'); + expect(result.returnDisplay).toBe('Listed 2 item(s). (1 git-ignored)'); + }); + + it('should respect geminiignore patterns', async () => { + const testPath = '/home/user/project/src'; + const mockFiles = ['file1.js', 'file2.js', 'private.js']; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + const pathStr = path.toString(); + if (pathStr === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + return { + isDirectory: () => false, + mtime: new Date(), + size: 1024, + } as fs.Stats; + }); + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + (mockFileService.shouldGeminiIgnoreFile as any).mockImplementation( + (path: string) => path.includes('private.js'), + ); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('file1.js'); + expect(result.llmContent).toContain('file2.js'); + expect(result.llmContent).not.toContain('private.js'); + expect(result.returnDisplay).toBe('Listed 2 item(s). 
(1 gemini-ignored)'); + }); + + it('should handle non-directory paths', async () => { + const testPath = '/home/user/project/file.txt'; + + vi.mocked(fs.statSync).mockReturnValue({ + isDirectory: () => false, + } as fs.Stats); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('Path is not a directory'); + expect(result.returnDisplay).toBe('Error: Path is not a directory.'); + }); + + it('should handle non-existent paths', async () => { + const testPath = '/home/user/project/does-not-exist'; + + vi.mocked(fs.statSync).mockImplementation(() => { + throw new Error('ENOENT: no such file or directory'); + }); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('Error listing directory'); + expect(result.returnDisplay).toBe('Error: Failed to list directory.'); + }); + + it('should sort directories first, then files alphabetically', async () => { + const testPath = '/home/user/project/src'; + const mockFiles = ['z-file.ts', 'a-dir', 'b-file.ts', 'c-dir']; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + if (path.toString() === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + if (path.toString().endsWith('-dir')) { + return { + isDirectory: () => true, + mtime: new Date(), + size: 0, + } as fs.Stats; + } + return { + isDirectory: () => false, + mtime: new Date(), + size: 1024, + } as fs.Stats; + }); + + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + const lines = ( + typeof result.llmContent === 'string' ? 
result.llmContent : '' + ).split('\n'); + const entries = lines.slice(1).filter((line: string) => line.trim()); // Skip header + expect(entries[0]).toBe('[DIR] a-dir'); + expect(entries[1]).toBe('[DIR] c-dir'); + expect(entries[2]).toBe('b-file.ts'); + expect(entries[3]).toBe('z-file.ts'); + }); + + it('should handle permission errors gracefully', async () => { + const testPath = '/home/user/project/restricted'; + + vi.mocked(fs.statSync).mockReturnValue({ + isDirectory: () => true, + } as fs.Stats); + vi.mocked(fs.readdirSync).mockImplementation(() => { + throw new Error('EACCES: permission denied'); + }); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('Error listing directory'); + expect(result.llmContent).toContain('permission denied'); + expect(result.returnDisplay).toBe('Error: Failed to list directory.'); + }); + + it('should validate parameters and return error for invalid params', async () => { + const result = await lsTool.execute( + { path: '../outside' }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('Invalid parameters provided'); + expect(result.returnDisplay).toBe('Error: Failed to execute tool.'); + }); + + it('should handle errors accessing individual files during listing', async () => { + const testPath = '/home/user/project/src'; + const mockFiles = ['accessible.ts', 'inaccessible.ts']; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + if (path.toString() === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + if (path.toString().endsWith('inaccessible.ts')) { + throw new Error('EACCES: permission denied'); + } + return { + isDirectory: () => false, + mtime: new Date(), + size: 1024, + } as fs.Stats; + }); + + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + + // Spy on console.error to verify it's called + const consoleErrorSpy = vi + .spyOn(console, 'error') + 
.mockImplementation(() => {}); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + // Should still list the accessible file + expect(result.llmContent).toContain('accessible.ts'); + expect(result.llmContent).not.toContain('inaccessible.ts'); + expect(result.returnDisplay).toBe('Listed 1 item(s).'); + + // Verify error was logged + expect(consoleErrorSpy).toHaveBeenCalledWith( + expect.stringContaining('Error accessing'), + ); + + consoleErrorSpy.mockRestore(); + }); + }); + + describe('getDescription', () => { + it('should return shortened relative path', () => { + const params = { + path: path.join(mockPrimaryDir, 'deeply', 'nested', 'directory'), + }; + + const description = lsTool.getDescription(params); + expect(description).toBe(path.join('deeply', 'nested', 'directory')); + }); + + it('should handle paths in secondary workspace', () => { + const params = { + path: path.join(mockSecondaryDir, 'lib'), + }; + + const description = lsTool.getDescription(params); + expect(description).toBe(path.join('..', 'other-project', 'lib')); + }); + }); + + describe('workspace boundary validation', () => { + it('should accept paths in primary workspace directory', () => { + const params = { path: `${mockPrimaryDir}/src` }; + expect(lsTool.validateToolParams(params)).toBeNull(); + }); + + it('should accept paths in secondary workspace directory', () => { + const params = { path: `${mockSecondaryDir}/lib` }; + expect(lsTool.validateToolParams(params)).toBeNull(); + }); + + it('should reject paths outside all workspace directories', () => { + const params = { path: '/etc/passwd' }; + const error = lsTool.validateToolParams(params); + expect(error).toContain( + 'Path must be within one of the workspace directories', + ); + expect(error).toContain(mockPrimaryDir); + expect(error).toContain(mockSecondaryDir); + }); + + it('should list files from secondary workspace directory', async () => { + const testPath = 
`${mockSecondaryDir}/tests`; + const mockFiles = ['test1.spec.ts', 'test2.spec.ts']; + + vi.mocked(fs.statSync).mockImplementation((path: any) => { + if (path.toString() === testPath) { + return { isDirectory: () => true } as fs.Stats; + } + return { + isDirectory: () => false, + mtime: new Date(), + size: 512, + } as fs.Stats; + }); + + vi.mocked(fs.readdirSync).mockReturnValue(mockFiles as any); + + const result = await lsTool.execute( + { path: testPath }, + new AbortController().signal, + ); + + expect(result.llmContent).toContain('test1.spec.ts'); + expect(result.llmContent).toContain('test2.spec.ts'); + expect(result.returnDisplay).toBe('Listed 2 item(s).'); + }); + }); +}); diff --git a/packages/core/src/tools/ls.ts b/packages/core/src/tools/ls.ts index 68a69101..8490f18a 100644 --- a/packages/core/src/tools/ls.ts +++ b/packages/core/src/tools/ls.ts @@ -11,7 +11,6 @@ import { Type } from '@google/genai'; import { SchemaValidator } from '../utils/schemaValidator.js'; import { makeRelative, shortenPath } from '../utils/paths.js'; import { Config, DEFAULT_FILE_FILTERING_OPTIONS } from '../config/config.js'; -import { isWithinRoot } from '../utils/fileUtils.js'; /** * Parameters for the LS tool @@ -129,8 +128,11 @@ export class LSTool extends BaseTool<LSToolParams, ToolResult> { if (!path.isAbsolute(params.path)) { return `Path must be absolute: ${params.path}`; } - if (!isWithinRoot(params.path, this.config.getTargetDir())) { - return `Path must be within the root directory (${this.config.getTargetDir()}): ${params.path}`; + + const workspaceContext = this.config.getWorkspaceContext(); + if (!workspaceContext.isPathWithinWorkspace(params.path)) { + const directories = workspaceContext.getDirectories(); + return `Path must be within one of the workspace directories: ${directories.join(', ')}`; } return null; } diff --git a/packages/core/src/tools/mcp-client.test.ts b/packages/core/src/tools/mcp-client.test.ts index 4560982c..a8289d3b 100644 --- 
a/packages/core/src/tools/mcp-client.test.ts +++ b/packages/core/src/tools/mcp-client.test.ts @@ -21,11 +21,14 @@ import { GoogleCredentialProvider } from '../mcp/google-auth-provider.js'; import { AuthProviderType } from '../config/config.js'; import { PromptRegistry } from '../prompts/prompt-registry.js'; +import { DiscoveredMCPTool } from './mcp-tool.js'; + vi.mock('@modelcontextprotocol/sdk/client/stdio.js'); vi.mock('@modelcontextprotocol/sdk/client/index.js'); vi.mock('@google/genai'); vi.mock('../mcp/oauth-provider.js'); vi.mock('../mcp/oauth-token-storage.js'); +vi.mock('./mcp-tool.js'); describe('mcp-client', () => { afterEach(() => { @@ -50,6 +53,52 @@ describe('mcp-client', () => { expect(tools.length).toBe(1); expect(mockedMcpToTool).toHaveBeenCalledOnce(); }); + + it('should log an error if there is an error discovering a tool', async () => { + const mockedClient = {} as unknown as ClientLib.Client; + const consoleErrorSpy = vi + .spyOn(console, 'error') + .mockImplementation(() => { + // no-op + }); + + const testError = new Error('Invalid tool name'); + vi.mocked(DiscoveredMCPTool).mockImplementation( + ( + _mcpCallableTool: GenAiLib.CallableTool, + _serverName: string, + name: string, + ) => { + if (name === 'invalid tool name') { + throw testError; + } + return { name: 'validTool' } as DiscoveredMCPTool; + }, + ); + + vi.mocked(GenAiLib.mcpToTool).mockReturnValue({ + tool: () => + Promise.resolve({ + functionDeclarations: [ + { + name: 'validTool', + }, + { + name: 'invalid tool name', // this will fail validation + }, + ], + }), + } as unknown as GenAiLib.CallableTool); + + const tools = await discoverTools('test-server', {}, mockedClient); + + expect(tools.length).toBe(1); + expect(tools[0].name).toBe('validTool'); + expect(consoleErrorSpy).toHaveBeenCalledOnce(); + expect(consoleErrorSpy).toHaveBeenCalledWith( + `Error discovering tool: 'invalid tool name' from MCP server 'test-server': ${testError.message}`, + ); + }); }); 
describe('discoverPrompts', () => { diff --git a/packages/core/src/tools/mcp-client.ts b/packages/core/src/tools/mcp-client.ts index d175af1f..00f2197a 100644 --- a/packages/core/src/tools/mcp-client.ts +++ b/packages/core/src/tools/mcp-client.ts @@ -145,20 +145,6 @@ export function getMCPDiscoveryState(): MCPDiscoveryState { return mcpDiscoveryState; } -/** - * Parse www-authenticate header to extract OAuth metadata URI. - * - * @param wwwAuthenticate The www-authenticate header value - * @returns The resource metadata URI if found, null otherwise - */ -function _parseWWWAuthenticate(wwwAuthenticate: string): string | null { - // Parse header like: Bearer realm="MCP Server", resource_metadata_uri="https://..." - const resourceMetadataMatch = wwwAuthenticate.match( - /resource_metadata_uri="([^"]+)"/, - ); - return resourceMetadataMatch ? resourceMetadataMatch[1] : null; -} - /** * Extract WWW-Authenticate header from error message string. * This is a more robust approach than regex matching. 
@@ -380,33 +366,47 @@ export async function connectAndDiscover( ): Promise<void> { updateMCPServerStatus(mcpServerName, MCPServerStatus.CONNECTING); + let mcpClient: Client | undefined; try { - const mcpClient = await connectToMcpServer( + mcpClient = await connectToMcpServer( mcpServerName, mcpServerConfig, debugMode, ); - try { - updateMCPServerStatus(mcpServerName, MCPServerStatus.CONNECTED); - mcpClient.onerror = (error) => { - console.error(`MCP ERROR (${mcpServerName}):`, error.toString()); - updateMCPServerStatus(mcpServerName, MCPServerStatus.DISCONNECTED); - }; - await discoverPrompts(mcpServerName, mcpClient, promptRegistry); - const tools = await discoverTools( - mcpServerName, - mcpServerConfig, - mcpClient, - ); - for (const tool of tools) { - toolRegistry.registerTool(tool); - } - } catch (error) { - mcpClient.close(); - throw error; + mcpClient.onerror = (error) => { + console.error(`MCP ERROR (${mcpServerName}):`, error.toString()); + updateMCPServerStatus(mcpServerName, MCPServerStatus.DISCONNECTED); + }; + + // Attempt to discover both prompts and tools + const prompts = await discoverPrompts( + mcpServerName, + mcpClient, + promptRegistry, + ); + const tools = await discoverTools( + mcpServerName, + mcpServerConfig, + mcpClient, + ); + + // If we have neither prompts nor tools, it's a failed discovery + if (prompts.length === 0 && tools.length === 0) { + throw new Error('No prompts or tools found on the server.'); + } + + // If we found anything, the server is connected + updateMCPServerStatus(mcpServerName, MCPServerStatus.CONNECTED); + + // Register any discovered tools + for (const tool of tools) { + toolRegistry.registerTool(tool); } } catch (error) { + if (mcpClient) { + mcpClient.close(); + } console.error( `Error connecting to MCP server '${mcpServerName}': ${getErrorMessage( error, @@ -437,30 +437,49 @@ export async function discoverTools( const tool = await mcpCallableTool.tool(); if (!Array.isArray(tool.functionDeclarations)) { - throw new 
Error(`Server did not return valid function declarations.`); + // This is a valid case for a prompt-only server + return []; } const discoveredTools: DiscoveredMCPTool[] = []; for (const funcDecl of tool.functionDeclarations) { - if (!isEnabled(funcDecl, mcpServerName, mcpServerConfig)) { - continue; - } + try { + if (!isEnabled(funcDecl, mcpServerName, mcpServerConfig)) { + continue; + } - discoveredTools.push( - new DiscoveredMCPTool( - mcpCallableTool, - mcpServerName, - funcDecl.name!, - funcDecl.description ?? '', - funcDecl.parametersJsonSchema ?? { type: 'object', properties: {} }, - mcpServerConfig.timeout ?? MCP_DEFAULT_TIMEOUT_MSEC, - mcpServerConfig.trust, - ), - ); + discoveredTools.push( + new DiscoveredMCPTool( + mcpCallableTool, + mcpServerName, + funcDecl.name!, + funcDecl.description ?? '', + funcDecl.parametersJsonSchema ?? { type: 'object', properties: {} }, + mcpServerConfig.timeout ?? MCP_DEFAULT_TIMEOUT_MSEC, + mcpServerConfig.trust, + ), + ); + } catch (error) { + console.error( + `Error discovering tool: '${ + funcDecl.name + }' from MCP server '${mcpServerName}': ${(error as Error).message}`, + ); + } } return discoveredTools; } catch (error) { - throw new Error(`Error discovering tools: ${error}`); + if ( + error instanceof Error && + !error.message?.includes('Method not found') + ) { + console.error( + `Error discovering tools from ${mcpServerName}: ${getErrorMessage( + error, + )}`, + ); + } + return []; } } @@ -475,7 +494,7 @@ export async function discoverPrompts( mcpServerName: string, mcpClient: Client, promptRegistry: PromptRegistry, -): Promise<void> { +): Promise<Prompt[]> { try { const response = await mcpClient.request( { method: 'prompts/list', params: {} }, @@ -490,6 +509,7 @@ export async function discoverPrompts( invokeMcpPrompt(mcpServerName, mcpClient, prompt.name, params), }); } + return response.prompts; } catch (error) { // It's okay if this fails, not all servers will have prompts. 
// Don't log an error if the method is not found, which is a common case. @@ -503,6 +523,7 @@ export async function discoverPrompts( )}`, ); } + return []; } } diff --git a/packages/core/src/tools/memoryTool.test.ts b/packages/core/src/tools/memoryTool.test.ts index 450d8071..75a2c08a 100644 --- a/packages/core/src/tools/memoryTool.test.ts +++ b/packages/core/src/tools/memoryTool.test.ts @@ -15,6 +15,7 @@ import { import * as fs from 'fs/promises'; import * as path from 'path'; import * as os from 'os'; +import { ToolConfirmationOutcome } from './tools.js'; // Mock dependencies vi.mock('fs/promises'); @@ -46,7 +47,7 @@ describe('MemoryTool', () => { }; beforeEach(() => { - vi.mocked(os.homedir).mockReturnValue('/mock/home'); + vi.mocked(os.homedir).mockReturnValue(path.join('/mock', 'home')); mockFsAdapter.readFile.mockReset(); mockFsAdapter.writeFile.mockReset().mockResolvedValue(undefined); mockFsAdapter.mkdir @@ -85,11 +86,11 @@ describe('MemoryTool', () => { }); describe('performAddMemoryEntry (static method)', () => { - const testFilePath = path.join( - '/mock/home', - '.qwen', - DEFAULT_CONTEXT_FILENAME, // Use the default for basic tests - ); + let testFilePath: string; + + beforeEach(() => { + testFilePath = path.join(os.homedir(), '.qwen', DEFAULT_CONTEXT_FILENAME); + }); it('should create section and save a fact if file does not exist', async () => { mockFsAdapter.readFile.mockRejectedValue({ code: 'ENOENT' }); // Simulate file not found @@ -206,7 +207,7 @@ describe('MemoryTool', () => { const result = await memoryTool.execute(params, mockAbortSignal); // Use getCurrentGeminiMdFilename for the default expectation before any setGeminiMdFilename calls in a test const expectedFilePath = path.join( - '/mock/home', + os.homedir(), '.qwen', getCurrentGeminiMdFilename(), // This will be DEFAULT_CONTEXT_FILENAME unless changed by a test ); @@ -262,4 +263,151 @@ describe('MemoryTool', () => { ); }); }); + + describe('shouldConfirmExecute', () => { + let 
memoryTool: MemoryTool; + + beforeEach(() => { + memoryTool = new MemoryTool(); + // Clear the allowlist before each test + (MemoryTool as unknown as { allowlist: Set<string> }).allowlist.clear(); + // Mock fs.readFile to return empty string (file doesn't exist) + vi.mocked(fs.readFile).mockResolvedValue(''); + }); + + it('should return confirmation details when memory file is not allowlisted', async () => { + const params = { fact: 'Test fact' }; + const result = await memoryTool.shouldConfirmExecute( + params, + mockAbortSignal, + ); + + expect(result).toBeDefined(); + expect(result).not.toBe(false); + + if (result && result.type === 'edit') { + const expectedPath = path.join('~', '.qwen', 'QWEN.md'); + expect(result.title).toBe(`Confirm Memory Save: ${expectedPath}`); + expect(result.fileName).toContain(path.join('mock', 'home', '.qwen')); + expect(result.fileName).toContain('QWEN.md'); + expect(result.fileDiff).toContain('Index: QWEN.md'); + expect(result.fileDiff).toContain('+## Qwen Added Memories'); + expect(result.fileDiff).toContain('+- Test fact'); + expect(result.originalContent).toBe(''); + expect(result.newContent).toContain('## Qwen Added Memories'); + expect(result.newContent).toContain('- Test fact'); + } + }); + + it('should return false when memory file is already allowlisted', async () => { + const params = { fact: 'Test fact' }; + const memoryFilePath = path.join( + os.homedir(), + '.qwen', + getCurrentGeminiMdFilename(), + ); + + // Add the memory file to the allowlist + (MemoryTool as unknown as { allowlist: Set<string> }).allowlist.add( + memoryFilePath, + ); + + const result = await memoryTool.shouldConfirmExecute( + params, + mockAbortSignal, + ); + + expect(result).toBe(false); + }); + + it('should add memory file to allowlist when ProceedAlways is confirmed', async () => { + const params = { fact: 'Test fact' }; + const memoryFilePath = path.join( + os.homedir(), + '.qwen', + getCurrentGeminiMdFilename(), + ); + + const result = await 
memoryTool.shouldConfirmExecute( + params, + mockAbortSignal, + ); + + expect(result).toBeDefined(); + expect(result).not.toBe(false); + + if (result && result.type === 'edit') { + // Simulate the onConfirm callback + await result.onConfirm(ToolConfirmationOutcome.ProceedAlways); + + // Check that the memory file was added to the allowlist + expect( + (MemoryTool as unknown as { allowlist: Set<string> }).allowlist.has( + memoryFilePath, + ), + ).toBe(true); + } + }); + + it('should not add memory file to allowlist when other outcomes are confirmed', async () => { + const params = { fact: 'Test fact' }; + const memoryFilePath = path.join( + os.homedir(), + '.qwen', + getCurrentGeminiMdFilename(), + ); + + const result = await memoryTool.shouldConfirmExecute( + params, + mockAbortSignal, + ); + + expect(result).toBeDefined(); + expect(result).not.toBe(false); + + if (result && result.type === 'edit') { + // Simulate the onConfirm callback with different outcomes + await result.onConfirm(ToolConfirmationOutcome.ProceedOnce); + expect( + (MemoryTool as unknown as { allowlist: Set<string> }).allowlist.has( + memoryFilePath, + ), + ).toBe(false); + + await result.onConfirm(ToolConfirmationOutcome.Cancel); + expect( + (MemoryTool as unknown as { allowlist: Set<string> }).allowlist.has( + memoryFilePath, + ), + ).toBe(false); + } + }); + + it('should handle existing memory file with content', async () => { + const params = { fact: 'New fact' }; + const existingContent = + 'Some existing content.\n\n## Qwen Added Memories\n- Old fact\n'; + + // Mock fs.readFile to return existing content + vi.mocked(fs.readFile).mockResolvedValue(existingContent); + + const result = await memoryTool.shouldConfirmExecute( + params, + mockAbortSignal, + ); + + expect(result).toBeDefined(); + expect(result).not.toBe(false); + + if (result && result.type === 'edit') { + const expectedPath = path.join('~', '.qwen', 'QWEN.md'); + expect(result.title).toBe(`Confirm Memory Save: ${expectedPath}`); + 
expect(result.fileDiff).toContain('Index: QWEN.md'); + expect(result.fileDiff).toContain('+- New fact'); + expect(result.originalContent).toBe(existingContent); + expect(result.newContent).toContain('- Old fact'); + expect(result.newContent).toContain('- New fact'); + } + }); + }); }); diff --git a/packages/core/src/tools/memoryTool.ts b/packages/core/src/tools/memoryTool.ts index 029b4b56..4089dc14 100644 --- a/packages/core/src/tools/memoryTool.ts +++ b/packages/core/src/tools/memoryTool.ts @@ -4,11 +4,21 @@ * SPDX-License-Identifier: Apache-2.0 */ -import { BaseTool, Icon, ToolResult } from './tools.js'; +import { + BaseTool, + ToolResult, + ToolEditConfirmationDetails, + ToolConfirmationOutcome, + Icon, +} from './tools.js'; import { FunctionDeclaration, Type } from '@google/genai'; import * as fs from 'fs/promises'; import * as path from 'path'; import { homedir } from 'os'; +import * as Diff from 'diff'; +import { DEFAULT_DIFF_OPTIONS } from './diffOptions.js'; +import { tildeifyPath } from '../utils/paths.js'; +import { ModifiableTool, ModifyContext } from './modifiable-tool.js'; const memoryToolSchemaData: FunctionDeclaration = { name: 'save_memory', @@ -80,6 +90,8 @@ export function getAllGeminiMdFilenames(): string[] { interface SaveMemoryParams { fact: string; + modified_by_user?: boolean; + modified_content?: string; } function getGlobalMemoryFilePath(): string { @@ -98,7 +110,12 @@ function ensureNewlineSeparation(currentContent: string): string { return '\n\n'; } -export class MemoryTool extends BaseTool { +export class MemoryTool + extends BaseTool + implements ModifiableTool +{ + private static readonly allowlist: Set = new Set(); + static readonly Name: string = memoryToolSchemaData.name!; constructor() { super( @@ -110,6 +127,111 @@ export class MemoryTool extends BaseTool { ); } + getDescription(_params: SaveMemoryParams): string { + const memoryFilePath = getGlobalMemoryFilePath(); + return `in ${tildeifyPath(memoryFilePath)}`; + } + + /** + * 
Reads the current content of the memory file
+   */
+  private async readMemoryFileContent(): Promise<string> {
+    try {
+      return await fs.readFile(getGlobalMemoryFilePath(), 'utf-8');
+    } catch (err) {
+      const error = err as Error & { code?: string };
+      if (!(error instanceof Error) || error.code !== 'ENOENT') throw err;
+      return '';
+    }
+  }
+
+  /**
+   * Computes the new content that would result from adding a memory entry
+   */
+  private computeNewContent(currentContent: string, fact: string): string {
+    let processedText = fact.trim();
+    processedText = processedText.replace(/^(-+\s*)+/, '').trim();
+    const newMemoryItem = `- ${processedText}`;
+
+    const headerIndex = currentContent.indexOf(MEMORY_SECTION_HEADER);
+
+    if (headerIndex === -1) {
+      // Header not found, append header and then the entry
+      const separator = ensureNewlineSeparation(currentContent);
+      return (
+        currentContent +
+        `${separator}${MEMORY_SECTION_HEADER}\n${newMemoryItem}\n`
+      );
+    } else {
+      // Header found, find where to insert the new memory entry
+      const startOfSectionContent = headerIndex + MEMORY_SECTION_HEADER.length;
+      let endOfSectionIndex = currentContent.indexOf(
+        '\n## ',
+        startOfSectionContent,
+      );
+      if (endOfSectionIndex === -1) {
+        endOfSectionIndex = currentContent.length; // End of file
+      }
+
+      const beforeSectionMarker = currentContent
+        .substring(0, startOfSectionContent)
+        .trimEnd();
+      let sectionContent = currentContent
+        .substring(startOfSectionContent, endOfSectionIndex)
+        .trimEnd();
+      const afterSectionMarker = currentContent.substring(endOfSectionIndex);
+
+      sectionContent += `\n${newMemoryItem}`;
+      return (
+        `${beforeSectionMarker}\n${sectionContent.trimStart()}\n${afterSectionMarker}`.trimEnd() +
+        '\n'
+      );
+    }
+  }
+
+  async shouldConfirmExecute(
+    params: SaveMemoryParams,
+    _abortSignal: AbortSignal,
+  ): Promise<ToolEditConfirmationDetails | false> {
+    const memoryFilePath = getGlobalMemoryFilePath();
+    const allowlistKey = memoryFilePath;
+
+    if (MemoryTool.allowlist.has(allowlistKey)) {
+      return
false; + } + + // Read current content of the memory file + const currentContent = await this.readMemoryFileContent(); + + // Calculate the new content that will be written to the memory file + const newContent = this.computeNewContent(currentContent, params.fact); + + const fileName = path.basename(memoryFilePath); + const fileDiff = Diff.createPatch( + fileName, + currentContent, + newContent, + 'Current', + 'Proposed', + DEFAULT_DIFF_OPTIONS, + ); + + const confirmationDetails: ToolEditConfirmationDetails = { + type: 'edit', + title: `Confirm Memory Save: ${tildeifyPath(memoryFilePath)}`, + fileName: memoryFilePath, + fileDiff, + originalContent: currentContent, + newContent, + onConfirm: async (outcome: ToolConfirmationOutcome) => { + if (outcome === ToolConfirmationOutcome.ProceedAlways) { + MemoryTool.allowlist.add(allowlistKey); + } + }, + }; + return confirmationDetails; + } + static async performAddMemoryEntry( text: string, memoryFilePath: string, @@ -184,7 +306,7 @@ export class MemoryTool extends BaseTool { params: SaveMemoryParams, _signal: AbortSignal, ): Promise { - const { fact } = params; + const { fact, modified_by_user, modified_content } = params; if (!fact || typeof fact !== 'string' || fact.trim() === '') { const errorMessage = 'Parameter "fact" must be a non-empty string.'; @@ -195,17 +317,44 @@ export class MemoryTool extends BaseTool { } try { - // Use the static method with actual fs promises - await MemoryTool.performAddMemoryEntry(fact, getGlobalMemoryFilePath(), { - readFile: fs.readFile, - writeFile: fs.writeFile, - mkdir: fs.mkdir, - }); - const successMessage = `Okay, I've remembered that: "${fact}"`; - return { - llmContent: JSON.stringify({ success: true, message: successMessage }), - returnDisplay: successMessage, - }; + if (modified_by_user && modified_content !== undefined) { + // User modified the content in external editor, write it directly + await fs.mkdir(path.dirname(getGlobalMemoryFilePath()), { + recursive: true, + }); + 
await fs.writeFile(
+          getGlobalMemoryFilePath(),
+          modified_content,
+          'utf-8',
+        );
+        const successMessage = `Okay, I've updated the memory file with your modifications.`;
+        return {
+          llmContent: JSON.stringify({
+            success: true,
+            message: successMessage,
+          }),
+          returnDisplay: successMessage,
+        };
+      } else {
+        // Use the normal memory entry logic
+        await MemoryTool.performAddMemoryEntry(
+          fact,
+          getGlobalMemoryFilePath(),
+          {
+            readFile: fs.readFile,
+            writeFile: fs.writeFile,
+            mkdir: fs.mkdir,
+          },
+        );
+        const successMessage = `Okay, I've remembered that: "${fact}"`;
+        return {
+          llmContent: JSON.stringify({
+            success: true,
+            message: successMessage,
+          }),
+          returnDisplay: successMessage,
+        };
+      }
     } catch (error) {
       const errorMessage =
         error instanceof Error ? error.message : String(error);
@@ -221,4 +370,25 @@ export class MemoryTool extends BaseTool<SaveMemoryParams, ToolResult> {
       };
     }
   }
+
+  getModifyContext(_abortSignal: AbortSignal): ModifyContext<SaveMemoryParams> {
+    return {
+      getFilePath: (_params: SaveMemoryParams) => getGlobalMemoryFilePath(),
+      getCurrentContent: async (_params: SaveMemoryParams): Promise<string> =>
+        this.readMemoryFileContent(),
+      getProposedContent: async (params: SaveMemoryParams): Promise<string> => {
+        const currentContent = await this.readMemoryFileContent();
+        return this.computeNewContent(currentContent, params.fact);
+      },
+      createUpdatedParams: (
+        _oldContent: string,
+        modifiedProposedContent: string,
+        originalParams: SaveMemoryParams,
+      ): SaveMemoryParams => ({
+        ...originalParams,
+        modified_by_user: true,
+        modified_content: modifiedProposedContent,
+      }),
+    };
+  }
 }
diff --git a/packages/core/src/tools/read-file.test.ts b/packages/core/src/tools/read-file.test.ts
index e06c353a..fa1e458c 100644
--- a/packages/core/src/tools/read-file.test.ts
+++ b/packages/core/src/tools/read-file.test.ts
@@ -12,6 +12,7 @@ import fs from 'fs';
 import fsp from 'fs/promises';
 import { Config } from '../config/config.js';
 import { FileDiscoveryService } from
'../services/fileDiscoveryService.js'; +import { createMockWorkspaceContext } from '../test-utils/mockWorkspaceContext.js'; describe('ReadFileTool', () => { let tempRootDir: string; @@ -27,6 +28,7 @@ describe('ReadFileTool', () => { const mockConfigInstance = { getFileService: () => new FileDiscoveryService(tempRootDir), getTargetDir: () => tempRootDir, + getWorkspaceContext: () => createMockWorkspaceContext(tempRootDir), } as unknown as Config; tool = new ReadFileTool(mockConfigInstance); }); @@ -65,8 +67,9 @@ describe('ReadFileTool', () => { it('should return error for path outside root', () => { const outsidePath = path.resolve(os.tmpdir(), 'outside-root.txt'); const params: ReadFileToolParams = { absolute_path: outsidePath }; - expect(tool.validateToolParams(params)).toMatch( - /File path must be within the root directory/, + const error = tool.validateToolParams(params); + expect(error).toContain( + 'File path must be within one of the workspace directories', ); }); @@ -219,7 +222,7 @@ describe('ReadFileTool', () => { 'Line 7', 'Line 8', ].join('\n'), - returnDisplay: '(truncated)', + returnDisplay: 'Read lines 6-8 of 20 from paginated.txt', }); }); @@ -261,4 +264,36 @@ describe('ReadFileTool', () => { }); }); }); + + describe('workspace boundary validation', () => { + it('should validate paths are within workspace root', () => { + const params: ReadFileToolParams = { + absolute_path: path.join(tempRootDir, 'file.txt'), + }; + expect(tool.validateToolParams(params)).toBeNull(); + }); + + it('should reject paths outside workspace root', () => { + const params: ReadFileToolParams = { + absolute_path: '/etc/passwd', + }; + const error = tool.validateToolParams(params); + expect(error).toContain( + 'File path must be within one of the workspace directories', + ); + expect(error).toContain(tempRootDir); + }); + + it('should provide clear error message with workspace directories', () => { + const outsidePath = path.join(os.tmpdir(), 'outside-workspace.txt'); + const 
params: ReadFileToolParams = { + absolute_path: outsidePath, + }; + const error = tool.validateToolParams(params); + expect(error).toContain( + 'File path must be within one of the workspace directories', + ); + expect(error).toContain(tempRootDir); + }); + }); }); diff --git a/packages/core/src/tools/read-file.ts b/packages/core/src/tools/read-file.ts index 9ba80672..31282c20 100644 --- a/packages/core/src/tools/read-file.ts +++ b/packages/core/src/tools/read-file.ts @@ -10,7 +10,6 @@ import { makeRelative, shortenPath } from '../utils/paths.js'; import { BaseTool, Icon, ToolLocation, ToolResult } from './tools.js'; import { Type } from '@google/genai'; import { - isWithinRoot, processSingleFileContent, getSpecificMimeType, } from '../utils/fileUtils.js'; @@ -86,8 +85,11 @@ export class ReadFileTool extends BaseTool { if (!path.isAbsolute(filePath)) { return `File path must be absolute, but was relative: ${filePath}. You must provide an absolute path.`; } - if (!isWithinRoot(filePath, this.config.getTargetDir())) { - return `File path must be within the root directory (${this.config.getTargetDir()}): ${filePath}`; + + const workspaceContext = this.config.getWorkspaceContext(); + if (!workspaceContext.isPathWithinWorkspace(filePath)) { + const directories = workspaceContext.getDirectories(); + return `File path must be within one of the workspace directories: ${directories.join(', ')}`; } if (params.offset !== undefined && params.offset < 0) { return 'Offset must be a non-negative number'; @@ -145,7 +147,7 @@ export class ReadFileTool extends BaseTool { if (result.error) { return { llmContent: result.error, // The detailed error for LLM - returnDisplay: result.returnDisplay, // User-friendly error + returnDisplay: result.returnDisplay || 'Error reading file', // User-friendly error }; } @@ -163,8 +165,8 @@ export class ReadFileTool extends BaseTool { ); return { - llmContent: result.llmContent, - returnDisplay: result.returnDisplay, + llmContent: result.llmContent 
|| '', + returnDisplay: result.returnDisplay || '', }; } } diff --git a/packages/core/src/tools/read-many-files.test.ts b/packages/core/src/tools/read-many-files.test.ts index 641aa705..68bb9b0e 100644 --- a/packages/core/src/tools/read-many-files.test.ts +++ b/packages/core/src/tools/read-many-files.test.ts @@ -13,6 +13,7 @@ import path from 'path'; import fs from 'fs'; // Actual fs for setup import os from 'os'; import { Config } from '../config/config.js'; +import { WorkspaceContext } from '../utils/workspaceContext.js'; vi.mock('mime-types', () => { const lookup = (filename: string) => { @@ -48,11 +49,11 @@ describe('ReadManyFilesTool', () => { let mockReadFileFn: Mock; beforeEach(async () => { - tempRootDir = fs.mkdtempSync( - path.join(os.tmpdir(), 'read-many-files-root-'), + tempRootDir = fs.realpathSync( + fs.mkdtempSync(path.join(os.tmpdir(), 'read-many-files-root-')), ); - tempDirOutsideRoot = fs.mkdtempSync( - path.join(os.tmpdir(), 'read-many-files-external-'), + tempDirOutsideRoot = fs.realpathSync( + fs.mkdtempSync(path.join(os.tmpdir(), 'read-many-files-external-')), ); fs.writeFileSync(path.join(tempRootDir, '.geminiignore'), 'foo.*'); const fileService = new FileDiscoveryService(tempRootDir); @@ -64,6 +65,8 @@ describe('ReadManyFilesTool', () => { respectGeminiIgnore: true, }), getTargetDir: () => tempRootDir, + getWorkspaceDirs: () => [tempRootDir], + getWorkspaceContext: () => new WorkspaceContext(tempRootDir), } as Partial as Config; tool = new ReadManyFilesTool(mockConfig); @@ -424,5 +427,54 @@ describe('ReadManyFilesTool', () => { expect(result.returnDisplay).not.toContain('foo.quux'); expect(result.returnDisplay).toContain('bar.ts'); }); + + it('should read files from multiple workspace directories', async () => { + const tempDir1 = fs.realpathSync( + fs.mkdtempSync(path.join(os.tmpdir(), 'multi-dir-1-')), + ); + const tempDir2 = fs.realpathSync( + fs.mkdtempSync(path.join(os.tmpdir(), 'multi-dir-2-')), + ); + const fileService = new 
FileDiscoveryService(tempDir1); + const mockConfig = { + getFileService: () => fileService, + getFileFilteringOptions: () => ({ + respectGitIgnore: true, + respectGeminiIgnore: true, + }), + getWorkspaceContext: () => new WorkspaceContext(tempDir1, [tempDir2]), + getTargetDir: () => tempDir1, + } as Partial as Config; + tool = new ReadManyFilesTool(mockConfig); + + fs.writeFileSync(path.join(tempDir1, 'file1.txt'), 'Content1'); + fs.writeFileSync(path.join(tempDir2, 'file2.txt'), 'Content2'); + + const params = { paths: ['*.txt'] }; + const result = await tool.execute(params, new AbortController().signal); + const content = result.llmContent as string[]; + if (!Array.isArray(content)) { + throw new Error(`llmContent is not an array: ${content}`); + } + const expectedPath1 = path.join(tempDir1, 'file1.txt'); + const expectedPath2 = path.join(tempDir2, 'file2.txt'); + + expect( + content.some((c) => + c.includes(`--- ${expectedPath1} ---\n\nContent1\n\n`), + ), + ).toBe(true); + expect( + content.some((c) => + c.includes(`--- ${expectedPath2} ---\n\nContent2\n\n`), + ), + ).toBe(true); + expect(result.returnDisplay).toContain( + 'Successfully read and concatenated content from **2 file(s)**', + ); + + fs.rmSync(tempDir1, { recursive: true, force: true }); + fs.rmSync(tempDir2, { recursive: true, force: true }); + }); }); }); diff --git a/packages/core/src/tools/read-many-files.ts b/packages/core/src/tools/read-many-files.ts index 94ec1a68..771577ec 100644 --- a/packages/core/src/tools/read-many-files.ts +++ b/packages/core/src/tools/read-many-files.ts @@ -302,17 +302,27 @@ Use this tool when the user's query implies needing the content of several files } try { - const patterns = searchPatterns.map((p) => p.replace(/\\/g, '/')); - const entries: string[] = await glob(patterns, { - cwd: this.config.getTargetDir(), - ignore: effectiveExcludes, - nodir: true, - dot: true, - absolute: true, - nocase: true, - signal, - withFileTypes: false, - }); + const allEntries = new 
Set<string>();
+      const workspaceDirs = this.config.getWorkspaceContext().getDirectories();
+
+      for (const dir of workspaceDirs) {
+        const entriesInDir = await glob(
+          searchPatterns.map((p) => p.replace(/\\/g, '/')),
+          {
+            cwd: dir,
+            ignore: effectiveExcludes,
+            nodir: true,
+            dot: true,
+            absolute: true,
+            nocase: true,
+            signal,
+          },
+        );
+        for (const entry of entriesInDir) {
+          allEntries.add(entry);
+        }
+      }
+      const entries = Array.from(allEntries);
 
       const gitFilteredEntries = fileFilteringOptions.respectGitIgnore
         ? fileDiscovery
@@ -345,11 +355,15 @@ Use this tool when the user's query implies needing the content of several files
     let geminiIgnoredCount = 0;
 
     for (const absoluteFilePath of entries) {
-      // Security check: ensure the glob library didn't return something outside targetDir.
-      if (!absoluteFilePath.startsWith(this.config.getTargetDir())) {
+      // Security check: ensure the glob library didn't return something outside the workspace.
+      if (
+        !this.config
+          .getWorkspaceContext()
+          .isPathWithinWorkspace(absoluteFilePath)
+      ) {
         skippedFiles.push({
           path: absoluteFilePath,
-          reason: `Security: Glob library returned path outside target directory. Base: ${this.config.getTargetDir()}, Path: ${absoluteFilePath}`,
+          reason: `Security: Glob library returned path outside workspace.
Path: ${absoluteFilePath}`, }); continue; } diff --git a/packages/core/src/tools/shell.test.ts b/packages/core/src/tools/shell.test.ts index 55364197..7f237e3d 100644 --- a/packages/core/src/tools/shell.test.ts +++ b/packages/core/src/tools/shell.test.ts @@ -37,6 +37,7 @@ import * as crypto from 'crypto'; import * as summarizer from '../utils/summarizer.js'; import { ToolConfirmationOutcome } from './tools.js'; import { OUTPUT_UPDATE_INTERVAL_MS } from './shell.js'; +import { createMockWorkspaceContext } from '../test-utils/mockWorkspaceContext.js'; describe('ShellTool', () => { let shellTool: ShellTool; @@ -53,6 +54,7 @@ describe('ShellTool', () => { getDebugMode: vi.fn().mockReturnValue(false), getTargetDir: vi.fn().mockReturnValue('/test/dir'), getSummarizeToolOutputConfig: vi.fn().mockReturnValue(undefined), + getWorkspaceContext: () => createMockWorkspaceContext('.'), getGeminiClient: vi.fn(), } as unknown as Config; @@ -105,7 +107,7 @@ describe('ShellTool', () => { vi.mocked(fs.existsSync).mockReturnValue(false); expect( shellTool.validateToolParams({ command: 'ls', directory: 'rel/path' }), - ).toBe('Directory must exist.'); + ).toBe("Directory 'rel/path' is not a registered workspace directory."); }); }); @@ -385,3 +387,37 @@ describe('ShellTool', () => { }); }); }); + +describe('validateToolParams', () => { + it('should return null for valid directory', () => { + const config = { + getCoreTools: () => undefined, + getExcludeTools: () => undefined, + getTargetDir: () => '/root', + getWorkspaceContext: () => + createMockWorkspaceContext('/root', ['/users/test']), + } as unknown as Config; + const shellTool = new ShellTool(config); + const result = shellTool.validateToolParams({ + command: 'ls', + directory: 'test', + }); + expect(result).toBeNull(); + }); + + it('should return error for directory outside workspace', () => { + const config = { + getCoreTools: () => undefined, + getExcludeTools: () => undefined, + getTargetDir: () => '/root', + 
getWorkspaceContext: () => + createMockWorkspaceContext('/root', ['/users/test']), + } as unknown as Config; + const shellTool = new ShellTool(config); + const result = shellTool.validateToolParams({ + command: 'ls', + directory: 'test2', + }); + expect(result).toContain('is not a registered workspace directory'); + }); +}); diff --git a/packages/core/src/tools/shell.ts b/packages/core/src/tools/shell.ts index 02fcbb7f..96423af1 100644 --- a/packages/core/src/tools/shell.ts +++ b/packages/core/src/tools/shell.ts @@ -124,14 +124,19 @@ export class ShellTool extends BaseTool { } if (params.directory) { if (path.isAbsolute(params.directory)) { - return 'Directory cannot be absolute. Must be relative to the project root directory.'; + return 'Directory cannot be absolute. Please refer to workspace directories by their name.'; } - const directory = path.resolve( - this.config.getTargetDir(), - params.directory, + const workspaceDirs = this.config.getWorkspaceContext().getDirectories(); + const matchingDirs = workspaceDirs.filter( + (dir) => path.basename(dir) === params.directory, ); - if (!fs.existsSync(directory)) { - return 'Directory must exist.'; + + if (matchingDirs.length === 0) { + return `Directory '${params.directory}' is not a registered workspace directory.`; + } + + if (matchingDirs.length > 1) { + return `Directory name '${params.directory}' is ambiguous as it matches multiple workspace directories.`; } } return null; diff --git a/packages/core/src/tools/tool-error.ts b/packages/core/src/tools/tool-error.ts new file mode 100644 index 00000000..38caa1da --- /dev/null +++ b/packages/core/src/tools/tool-error.ts @@ -0,0 +1,28 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +/** + * A type-safe enum for tool-related errors. 
+ */ +export enum ToolErrorType { + // General Errors + INVALID_TOOL_PARAMS = 'invalid_tool_params', + UNKNOWN = 'unknown', + UNHANDLED_EXCEPTION = 'unhandled_exception', + TOOL_NOT_REGISTERED = 'tool_not_registered', + + // File System Errors + FILE_NOT_FOUND = 'file_not_found', + FILE_WRITE_FAILURE = 'file_write_failure', + READ_CONTENT_FAILURE = 'read_content_failure', + ATTEMPT_TO_CREATE_EXISTING_FILE = 'attempt_to_create_existing_file', + + // Edit-specific Errors + EDIT_PREPARATION_FAILURE = 'edit_preparation_failure', + EDIT_NO_OCCURRENCE_FOUND = 'edit_no_occurrence_found', + EDIT_EXPECTED_OCCURRENCE_MISMATCH = 'edit_expected_occurrence_mismatch', + EDIT_NO_CHANGE = 'edit_no_change', +} diff --git a/packages/core/src/tools/tool-registry.test.ts b/packages/core/src/tools/tool-registry.test.ts index b3fdd7a3..de7c6309 100644 --- a/packages/core/src/tools/tool-registry.test.ts +++ b/packages/core/src/tools/tool-registry.test.ts @@ -30,6 +30,10 @@ import { Schema, } from '@google/genai'; import { spawn } from 'node:child_process'; +import { IdeClient } from '../ide/ide-client.js'; +import fs from 'node:fs'; + +vi.mock('node:fs'); // Use vi.hoisted to define the mock function so it can be used in the vi.mock factory const mockDiscoverMcpTools = vi.hoisted(() => vi.fn()); @@ -136,6 +140,7 @@ const baseConfigParams: ConfigParameters = { geminiMdFileCount: 0, approvalMode: ApprovalMode.DEFAULT, sessionId: 'test-session-id', + ideClient: IdeClient.getInstance(false), }; describe('ToolRegistry', () => { @@ -144,6 +149,10 @@ describe('ToolRegistry', () => { let mockConfigGetToolDiscoveryCommand: ReturnType; beforeEach(() => { + vi.mocked(fs.existsSync).mockReturnValue(true); + vi.mocked(fs.statSync).mockReturnValue({ + isDirectory: () => true, + } as fs.Stats); config = new Config(baseConfigParams); toolRegistry = new ToolRegistry(config); vi.spyOn(console, 'warn').mockImplementation(() => {}); diff --git a/packages/core/src/tools/tools.ts 
b/packages/core/src/tools/tools.ts index 0d7b402a..0e3ffabf 100644 --- a/packages/core/src/tools/tools.ts +++ b/packages/core/src/tools/tools.ts @@ -5,6 +5,7 @@ */ import { FunctionDeclaration, PartListUnion, Schema } from '@google/genai'; +import { ToolErrorType } from './tool-error.js'; /** * Interface representing the base Tool functionality @@ -217,6 +218,14 @@ export interface ToolResult { * For now, we keep it as the core logic in ReadFileTool currently produces it. */ returnDisplay: ToolResultDisplay; + + /** + * If this property is present, the tool call is considered a failure. + */ + error?: { + message: string; // raw error message + type?: ToolErrorType; // An optional machine-readable error type (e.g., 'FILE_NOT_FOUND'). + }; } export type ToolResultDisplay = string | FileDiff; diff --git a/packages/core/src/tools/write-file.test.ts b/packages/core/src/tools/write-file.test.ts index c33b5fa2..fe662a02 100644 --- a/packages/core/src/tools/write-file.test.ts +++ b/packages/core/src/tools/write-file.test.ts @@ -13,7 +13,7 @@ import { vi, type Mocked, } from 'vitest'; -import { WriteFileTool } from './write-file.js'; +import { WriteFileTool, WriteFileToolParams } from './write-file.js'; import { FileDiff, ToolConfirmationOutcome, @@ -31,6 +31,7 @@ import { ensureCorrectFileContent, CorrectedEditResult, } from '../utils/editCorrector.js'; +import { createMockWorkspaceContext } from '../test-utils/mockWorkspaceContext.js'; const rootDir = path.resolve(os.tmpdir(), 'gemini-cli-test-root'); @@ -54,6 +55,7 @@ const mockConfigInternal = { getApprovalMode: vi.fn(() => ApprovalMode.DEFAULT), setApprovalMode: vi.fn(), getGeminiClient: vi.fn(), // Initialize as a plain mock function + getWorkspaceContext: () => createMockWorkspaceContext(rootDir), getApiKey: () => 'test-key', getModel: () => 'test-model', getSandbox: () => false, @@ -83,6 +85,7 @@ describe('WriteFileTool', () => { let tempDir: string; beforeEach(() => { + vi.clearAllMocks(); // Create a unique 
temporary directory for files created outside the root
     tempDir = fs.mkdtempSync(
       path.join(os.tmpdir(), 'write-file-test-external-'),
@@ -98,6 +101,11 @@
     ) as Mocked<GeminiClient>;
     vi.mocked(GeminiClient).mockImplementation(() => mockGeminiClientInstance);
 
+    vi.mocked(ensureCorrectEdit).mockImplementation(mockEnsureCorrectEdit);
+    vi.mocked(ensureCorrectFileContent).mockImplementation(
+      mockEnsureCorrectFileContent,
+    );
+
     // Now that mockGeminiClientInstance is initialized, set the mock implementation for getGeminiClient
     mockConfigInternal.getGeminiClient.mockReturnValue(
       mockGeminiClientInstance,
@@ -177,8 +185,9 @@ describe('WriteFileTool', () => {
       file_path: outsidePath,
       content: 'hello',
     };
-    expect(tool.validateToolParams(params)).toMatch(
-      /File path must be within the root directory/,
+    const error = tool.validateToolParams(params);
+    expect(error).toContain(
+      'File path must be within one of the workspace directories',
+    );
   });
 
@@ -193,6 +202,32 @@
       `Path is a directory, not a file: ${dirAsFilePath}`,
     );
   });
+
+    it('should return error if the content is null', () => {
+      const dirAsFilePath = path.join(rootDir, 'a_directory');
+      fs.mkdirSync(dirAsFilePath);
+      const params = {
+        file_path: dirAsFilePath,
+        content: null,
+      } as unknown as WriteFileToolParams; // Intentionally non-conforming
+      expect(tool.validateToolParams(params)).toMatch(
+        `params/content must be string`,
+      );
+    });
+  });
+
+  describe('getDescription', () => {
+    it('should return error if the file_path is empty', () => {
+      const dirAsFilePath = path.join(rootDir, 'a_directory');
+      fs.mkdirSync(dirAsFilePath);
+      const params = {
+        file_path: '',
+        content: '',
+      };
+      expect(tool.getDescription(params)).toMatch(
+        `Model did not provide valid parameters for write file tool, missing or empty "file_path"`,
+      );
+    });
   });
 
   describe('_getCorrectedFileContent', () => {
@@ -427,8 +462,8 @@ describe('WriteFileTool', () => {
    const params = {
file_path: outsidePath, content: 'test' }; const result = await tool.execute(params, abortSignal); expect(result.llmContent).toMatch(/Error: Invalid parameters provided/); - expect(result.returnDisplay).toMatch( - /Error: File path must be within the root directory/, + expect(result.returnDisplay).toContain( + 'Error: File path must be within one of the workspace directories', ); }); @@ -616,4 +651,39 @@ describe('WriteFileTool', () => { expect(result.llmContent).not.toMatch(/User modified the `content`/); }); }); + + describe('workspace boundary validation', () => { + it('should validate paths are within workspace root', () => { + const params = { + file_path: path.join(rootDir, 'file.txt'), + content: 'test content', + }; + expect(tool.validateToolParams(params)).toBeNull(); + }); + + it('should reject paths outside workspace root', () => { + const params = { + file_path: '/etc/passwd', + content: 'malicious', + }; + const error = tool.validateToolParams(params); + expect(error).toContain( + 'File path must be within one of the workspace directories', + ); + expect(error).toContain(rootDir); + }); + + it('should provide clear error message with workspace directories', () => { + const outsidePath = path.join(tempDir, 'outside-root.txt'); + const params = { + file_path: outsidePath, + content: 'test', + }; + const error = tool.validateToolParams(params); + expect(error).toContain( + 'File path must be within one of the workspace directories', + ); + expect(error).toContain(rootDir); + }); + }); }); diff --git a/packages/core/src/tools/write-file.ts b/packages/core/src/tools/write-file.ts index ae37ca8a..1cb1a917 100644 --- a/packages/core/src/tools/write-file.ts +++ b/packages/core/src/tools/write-file.ts @@ -27,7 +27,7 @@ import { } from '../utils/editCorrector.js'; import { DEFAULT_DIFF_OPTIONS } from './diffOptions.js'; import { ModifiableTool, ModifyContext } from './modifiable-tool.js'; -import { getSpecificMimeType, isWithinRoot } from 
'../utils/fileUtils.js'; +import { getSpecificMimeType } from '../utils/fileUtils.js'; import { recordFileOperationMetric, FileOperation, @@ -105,8 +105,11 @@ export class WriteFileTool if (!path.isAbsolute(filePath)) { return `File path must be absolute: ${filePath}`; } - if (!isWithinRoot(filePath, this.config.getTargetDir())) { - return `File path must be within the root directory (${this.config.getTargetDir()}): ${filePath}`; + + const workspaceContext = this.config.getWorkspaceContext(); + if (!workspaceContext.isPathWithinWorkspace(filePath)) { + const directories = workspaceContext.getDirectories(); + return `File path must be within one of the workspace directories: ${directories.join(', ')}`; } try { @@ -128,8 +131,8 @@ export class WriteFileTool } getDescription(params: WriteFileToolParams): string { - if (!params.file_path || !params.content) { - return `Model did not provide valid parameters for write file tool`; + if (!params.file_path) { + return `Model did not provide valid parameters for write file tool, missing or empty "file_path"`; } const relativePath = makeRelative( params.file_path, diff --git a/packages/core/src/utils/bfsFileSearch.test.ts b/packages/core/src/utils/bfsFileSearch.test.ts index 63198a8d..ce19f80e 100644 --- a/packages/core/src/utils/bfsFileSearch.test.ts +++ b/packages/core/src/utils/bfsFileSearch.test.ts @@ -189,4 +189,81 @@ describe('bfsFileSearch', () => { expect(result.sort()).toEqual([target1, target2].sort()); }); }); + + it('should perform parallel directory scanning efficiently (performance test)', async () => { + // Create a more complex directory structure for performance testing + console.log('\n🚀 Testing Parallel BFS Performance...'); + + // Create 50 directories with multiple levels for faster test execution + for (let i = 0; i < 50; i++) { + await createEmptyDir(`dir${i}`); + await createEmptyDir(`dir${i}`, 'subdir1'); + await createEmptyDir(`dir${i}`, 'subdir2'); + await createEmptyDir(`dir${i}`, 'subdir1', 
'deep'); + if (i < 10) { + // Add target files in some directories + await createTestFile('content', `dir${i}`, 'GEMINI.md'); + await createTestFile('content', `dir${i}`, 'subdir1', 'GEMINI.md'); + } + } + + // Run multiple iterations to ensure consistency + const iterations = 3; + const durations: number[] = []; + let foundFiles = 0; + let firstResultSorted: string[] | undefined; + + for (let i = 0; i < iterations; i++) { + const searchStartTime = performance.now(); + const result = await bfsFileSearch(testRootDir, { + fileName: 'GEMINI.md', + maxDirs: 200, + debug: false, + }); + const duration = performance.now() - searchStartTime; + durations.push(duration); + + // Verify consistency: all iterations should find the exact same files + if (firstResultSorted === undefined) { + foundFiles = result.length; + firstResultSorted = result.sort(); + } else { + expect(result.sort()).toEqual(firstResultSorted); + } + + console.log(`📊 Iteration ${i + 1}: ${duration.toFixed(2)}ms`); + } + + const avgDuration = durations.reduce((a, b) => a + b, 0) / durations.length; + const maxDuration = Math.max(...durations); + const minDuration = Math.min(...durations); + + console.log(`📊 Average Duration: ${avgDuration.toFixed(2)}ms`); + console.log( + `📊 Min/Max Duration: ${minDuration.toFixed(2)}ms / ${maxDuration.toFixed(2)}ms`, + ); + console.log(`📁 Found ${foundFiles} GEMINI.md files`); + console.log( + `🏎️ Processing ~${Math.round(200 / (avgDuration / 1000))} dirs/second`, + ); + + // Verify we found the expected files + expect(foundFiles).toBe(20); // 10 dirs * 2 files each + + // Performance expectation: check consistency rather than absolute time + const variance = maxDuration - minDuration; + const consistencyRatio = variance / avgDuration; + + // Ensure reasonable performance (generous limit for CI environments) + expect(avgDuration).toBeLessThan(2000); // Very generous limit + + // Ensure consistency across runs (variance should not be too high) + // More tolerant in CI 
environments where performance can be variable + const maxConsistencyRatio = process.env.CI ? 3.0 : 1.5; + expect(consistencyRatio).toBeLessThan(maxConsistencyRatio); // Max variance should be reasonable + + console.log( + `✅ Performance test passed: avg=${avgDuration.toFixed(2)}ms, consistency=${(consistencyRatio * 100).toFixed(1)}% (threshold: ${(maxConsistencyRatio * 100).toFixed(0)}%)`, + ); + }); }); diff --git a/packages/core/src/utils/bfsFileSearch.ts b/packages/core/src/utils/bfsFileSearch.ts index 790521e0..c5b82f2f 100644 --- a/packages/core/src/utils/bfsFileSearch.ts +++ b/packages/core/src/utils/bfsFileSearch.ts @@ -6,7 +6,6 @@ import * as fs from 'fs/promises'; import * as path from 'path'; -import { Dirent } from 'fs'; import { FileDiscoveryService } from '../services/fileDiscoveryService.js'; import { FileFilteringOptions } from '../config/config.js'; // Simple console logger for now. @@ -47,45 +46,76 @@ export async function bfsFileSearch( const queue: string[] = [rootDir]; const visited = new Set(); let scannedDirCount = 0; + let queueHead = 0; // Pointer-based queue head to avoid expensive splice operations - while (queue.length > 0 && scannedDirCount < maxDirs) { - const currentDir = queue.shift()!; - if (visited.has(currentDir)) { - continue; + // Convert ignoreDirs array to Set for O(1) lookup performance + const ignoreDirsSet = new Set(ignoreDirs); + + // Process directories in parallel batches for maximum performance + const PARALLEL_BATCH_SIZE = 15; // Parallel processing batch size for optimal performance + + while (queueHead < queue.length && scannedDirCount < maxDirs) { + // Fill batch with unvisited directories up to the desired size + const batchSize = Math.min(PARALLEL_BATCH_SIZE, maxDirs - scannedDirCount); + const currentBatch = []; + while (currentBatch.length < batchSize && queueHead < queue.length) { + const currentDir = queue[queueHead]; + queueHead++; + if (!visited.has(currentDir)) { + visited.add(currentDir); + 
currentBatch.push(currentDir); + } } - visited.add(currentDir); - scannedDirCount++; + scannedDirCount += currentBatch.length; + + if (currentBatch.length === 0) continue; if (debug) { - logger.debug(`Scanning [${scannedDirCount}/${maxDirs}]: ${currentDir}`); + logger.debug( + `Scanning [${scannedDirCount}/${maxDirs}]: batch of ${currentBatch.length}`, + ); } - let entries: Dirent[]; - try { - entries = await fs.readdir(currentDir, { withFileTypes: true }); - } catch { - // Ignore errors for directories we can't read (e.g., permissions) - continue; - } - - for (const entry of entries) { - const fullPath = path.join(currentDir, entry.name); - if ( - fileService?.shouldIgnoreFile(fullPath, { - respectGitIgnore: options.fileFilteringOptions?.respectGitIgnore, - respectGeminiIgnore: - options.fileFilteringOptions?.respectGeminiIgnore, - }) - ) { - continue; - } - - if (entry.isDirectory()) { - if (!ignoreDirs.includes(entry.name)) { - queue.push(fullPath); + // Read directories in parallel instead of one by one + const readPromises = currentBatch.map(async (currentDir) => { + try { + const entries = await fs.readdir(currentDir, { withFileTypes: true }); + return { currentDir, entries }; + } catch (error) { + // Warn user that a directory could not be read, as this affects search results. + const message = (error as Error)?.message ?? 
'Unknown error'; + console.warn( + `[WARN] Skipping unreadable directory: ${currentDir} (${message})`, + ); + if (debug) { + logger.debug(`Full error for ${currentDir}:`, error); + } + return { currentDir, entries: [] }; + } + }); + + const results = await Promise.all(readPromises); + + for (const { currentDir, entries } of results) { + for (const entry of entries) { + const fullPath = path.join(currentDir, entry.name); + if ( + fileService?.shouldIgnoreFile(fullPath, { + respectGitIgnore: options.fileFilteringOptions?.respectGitIgnore, + respectGeminiIgnore: + options.fileFilteringOptions?.respectGeminiIgnore, + }) + ) { + continue; + } + + if (entry.isDirectory()) { + if (!ignoreDirsSet.has(entry.name)) { + queue.push(fullPath); + } + } else if (entry.isFile() && entry.name === fileName) { + foundFiles.push(fullPath); } - } else if (entry.isFile() && entry.name === fileName) { - foundFiles.push(fullPath); } } } diff --git a/packages/core/src/utils/editCorrector.ts b/packages/core/src/utils/editCorrector.ts index a770c491..0ef8d4fe 100644 --- a/packages/core/src/utils/editCorrector.ts +++ b/packages/core/src/utils/editCorrector.ts @@ -17,14 +17,14 @@ import { ReadFileTool } from '../tools/read-file.js'; import { ReadManyFilesTool } from '../tools/read-many-files.js'; import { GrepTool } from '../tools/grep.js'; import { LruCache } from './LruCache.js'; -import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js'; +import { DEFAULT_GEMINI_FLASH_LITE_MODEL } from '../config/models.js'; import { isFunctionResponse, isFunctionCall, } from '../utils/messageInspectors.js'; import * as fs from 'fs'; -const EditModel = DEFAULT_GEMINI_FLASH_MODEL; +const EditModel = DEFAULT_GEMINI_FLASH_LITE_MODEL; const EditConfig: GenerateContentConfig = { thinkingConfig: { thinkingBudget: 0, diff --git a/packages/core/src/utils/editor.test.ts b/packages/core/src/utils/editor.test.ts index a86d6f59..203223ae 100644 --- a/packages/core/src/utils/editor.test.ts +++ 
b/packages/core/src/utils/editor.test.ts @@ -70,6 +70,7 @@ describe('editor utils', () => { { editor: 'vim', commands: ['vim'], win32Commands: ['vim'] }, { editor: 'neovim', commands: ['nvim'], win32Commands: ['nvim'] }, { editor: 'zed', commands: ['zed', 'zeditor'], win32Commands: ['zed'] }, + { editor: 'emacs', commands: ['emacs'], win32Commands: ['emacs.exe'] }, ]; for (const { editor, commands, win32Commands } of testCases) { @@ -297,6 +298,14 @@ describe('editor utils', () => { }); } + it('should return the correct command for emacs', () => { + const command = getDiffCommand('old.txt', 'new.txt', 'emacs'); + expect(command).toEqual({ + command: 'emacs', + args: ['--eval', '(ediff "old.txt" "new.txt")'], + }); + }); + it('should return null for an unsupported editor', () => { // @ts-expect-error Testing unsupported editor const command = getDiffCommand('old.txt', 'new.txt', 'foobar'); @@ -372,7 +381,7 @@ describe('editor utils', () => { }); } - const execSyncEditors: EditorType[] = ['vim', 'neovim']; + const execSyncEditors: EditorType[] = ['vim', 'neovim', 'emacs']; for (const editor of execSyncEditors) { it(`should call execSync for ${editor} on non-windows`, async () => { Object.defineProperty(process, 'platform', { value: 'linux' }); @@ -425,6 +434,15 @@ describe('editor utils', () => { expect(allowEditorTypeInSandbox('vim')).toBe(true); }); + it('should allow emacs in sandbox mode', () => { + process.env.SANDBOX = 'sandbox'; + expect(allowEditorTypeInSandbox('emacs')).toBe(true); + }); + + it('should allow emacs when not in sandbox mode', () => { + expect(allowEditorTypeInSandbox('emacs')).toBe(true); + }); + it('should allow neovim in sandbox mode', () => { process.env.SANDBOX = 'sandbox'; expect(allowEditorTypeInSandbox('neovim')).toBe(true); @@ -490,6 +508,12 @@ describe('editor utils', () => { expect(isEditorAvailable('vim')).toBe(true); }); + it('should return true for emacs when installed and in sandbox mode', () => { + (execSync as 
Mock).mockReturnValue(Buffer.from('/usr/bin/emacs')); + process.env.SANDBOX = 'sandbox'; + expect(isEditorAvailable('emacs')).toBe(true); + }); + it('should return true for neovim when installed and in sandbox mode', () => { (execSync as Mock).mockReturnValue(Buffer.from('/usr/bin/nvim')); process.env.SANDBOX = 'sandbox'; diff --git a/packages/core/src/utils/editor.ts b/packages/core/src/utils/editor.ts index 2d65d525..704d1cbb 100644 --- a/packages/core/src/utils/editor.ts +++ b/packages/core/src/utils/editor.ts @@ -13,7 +13,8 @@ export type EditorType = | 'cursor' | 'vim' | 'neovim' - | 'zed'; + | 'zed' + | 'emacs'; function isValidEditorType(editor: string): editor is EditorType { return [ @@ -24,6 +25,7 @@ function isValidEditorType(editor: string): editor is EditorType { 'vim', 'neovim', 'zed', + 'emacs', ].includes(editor); } @@ -59,6 +61,7 @@ const editorCommands: Record< vim: { win32: ['vim'], default: ['vim'] }, neovim: { win32: ['nvim'], default: ['nvim'] }, zed: { win32: ['zed'], default: ['zed', 'zeditor'] }, + emacs: { win32: ['emacs.exe'], default: ['emacs'] }, }; export function checkHasEditorType(editor: EditorType): boolean { @@ -73,6 +76,7 @@ export function allowEditorTypeInSandbox(editor: EditorType): boolean { if (['vscode', 'vscodium', 'windsurf', 'cursor', 'zed'].includes(editor)) { return notUsingSandbox; } + // For terminal-based editors like vim and emacs, allow in sandbox. 
return true; } @@ -141,6 +145,11 @@ export function getDiffCommand( newPath, ], }; + case 'emacs': + return { + command: 'emacs', + args: ['--eval', `(ediff "${oldPath}" "${newPath}")`], + }; default: return null; } @@ -190,6 +199,7 @@ export async function openDiff( }); case 'vim': + case 'emacs': case 'neovim': { // Use execSync for terminal-based editors const command = diff --git a/packages/core/src/utils/fileUtils.test.ts b/packages/core/src/utils/fileUtils.test.ts index b8e75561..ca121bca 100644 --- a/packages/core/src/utils/fileUtils.test.ts +++ b/packages/core/src/utils/fileUtils.test.ts @@ -420,7 +420,7 @@ describe('fileUtils', () => { expect(result.llmContent).toContain( '[File content truncated: showing lines 6-10 of 20 total lines. Use offset/limit parameters to view more.]', ); - expect(result.returnDisplay).toBe('(truncated)'); + expect(result.returnDisplay).toBe('Read lines 6-10 of 20 from test.txt'); expect(result.isTruncated).toBe(true); expect(result.originalLineCount).toBe(20); expect(result.linesShown).toEqual([6, 10]); @@ -465,9 +465,72 @@ describe('fileUtils', () => { expect(result.llmContent).toContain( '[File content partially truncated: some lines exceeded maximum length of 2000 characters.]', ); + expect(result.returnDisplay).toBe( + 'Read all 3 lines from test.txt (some lines were shortened)', + ); expect(result.isTruncated).toBe(true); }); + it('should truncate when line count exceeds the limit', async () => { + const lines = Array.from({ length: 11 }, (_, i) => `Line ${i + 1}`); + actualNodeFs.writeFileSync(testTextFilePath, lines.join('\n')); + + // Read 5 lines, but there are 11 total + const result = await processSingleFileContent( + testTextFilePath, + tempRootDir, + 0, + 5, + ); + + expect(result.isTruncated).toBe(true); + expect(result.returnDisplay).toBe('Read lines 1-5 of 11 from test.txt'); + }); + + it('should truncate when a line length exceeds the character limit', async () => { + const longLine = 'b'.repeat(2500); + const 
lines = Array.from({ length: 10 }, (_, i) => `Line ${i + 1}`); + lines.push(longLine); // Total 11 lines + actualNodeFs.writeFileSync(testTextFilePath, lines.join('\n')); + + // Read all 11 lines, including the long one + const result = await processSingleFileContent( + testTextFilePath, + tempRootDir, + 0, + 11, + ); + + expect(result.isTruncated).toBe(true); + expect(result.returnDisplay).toBe( + 'Read all 11 lines from test.txt (some lines were shortened)', + ); + }); + + it('should truncate both line count and line length when both exceed limits', async () => { + const linesWithLongInMiddle = Array.from( + { length: 20 }, + (_, i) => `Line ${i + 1}`, + ); + linesWithLongInMiddle[4] = 'c'.repeat(2500); + actualNodeFs.writeFileSync( + testTextFilePath, + linesWithLongInMiddle.join('\n'), + ); + + // Read 10 lines out of 20, including the long line + const result = await processSingleFileContent( + testTextFilePath, + tempRootDir, + 0, + 10, + ); + expect(result.isTruncated).toBe(true); + expect(result.returnDisplay).toBe( + 'Read lines 1-10 of 20 from test.txt (some lines were shortened)', + ); + }); + it('should return an error if the file size exceeds 20MB', async () => { // Create a file just over 20MB const twentyOneMB = 21 * 1024 * 1024; diff --git a/packages/core/src/utils/fileUtils.ts b/packages/core/src/utils/fileUtils.ts index 6b5ce42c..c016cd4a 100644 --- a/packages/core/src/utils/fileUtils.ts +++ b/packages/core/src/utils/fileUtils.ts @@ -310,9 +310,22 @@ export async function processSingleFileContent( } llmTextContent += formattedLines.join('\n'); + // By default, return nothing to streamline the common case of a successful read_file. 
+ let returnDisplay = ''; + if (contentRangeTruncated) { + returnDisplay = `Read lines ${ + actualStartLine + 1 + }-${endLine} of ${originalLineCount} from ${relativePathForDisplay}`; + if (linesWereTruncatedInLength) { + returnDisplay += ' (some lines were shortened)'; + } + } else if (linesWereTruncatedInLength) { + returnDisplay = `Read all ${originalLineCount} lines from ${relativePathForDisplay} (some lines were shortened)`; + } + return { llmContent: llmTextContent, - returnDisplay: isTruncated ? '(truncated)' : '', + returnDisplay, isTruncated, originalLineCount, linesShown: [actualStartLine + 1, endLine], diff --git a/packages/core/src/utils/flashFallback.integration.test.ts b/packages/core/src/utils/flashFallback.integration.test.ts index f5e354a0..7f18b24f 100644 --- a/packages/core/src/utils/flashFallback.integration.test.ts +++ b/packages/core/src/utils/flashFallback.integration.test.ts @@ -6,6 +6,7 @@ import { describe, it, expect, beforeEach, vi } from 'vitest'; import { Config } from '../config/config.js'; +import fs from 'node:fs'; import { setSimulate429, disableSimulationAfterFallback, @@ -16,17 +17,25 @@ import { import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js'; import { retryWithBackoff } from './retry.js'; import { AuthType } from '../core/contentGenerator.js'; +import { IdeClient } from '../ide/ide-client.js'; + +vi.mock('node:fs'); describe('Flash Fallback Integration', () => { let config: Config; beforeEach(() => { + vi.mocked(fs.existsSync).mockReturnValue(true); + vi.mocked(fs.statSync).mockReturnValue({ + isDirectory: () => true, + } as fs.Stats); config = new Config({ sessionId: 'test-session', targetDir: '/test', debugMode: false, cwd: '/test', model: 'gemini-2.5-pro', + ideClient: IdeClient.getInstance(false), }); // Reset simulation state for each test diff --git a/packages/core/src/utils/memoryDiscovery.test.ts b/packages/core/src/utils/memoryDiscovery.test.ts index 2fb2fcb1..8c7a294d 100644 --- 
a/packages/core/src/utils/memoryDiscovery.test.ts +++ b/packages/core/src/utils/memoryDiscovery.test.ts @@ -305,10 +305,12 @@ Subdir memory false, new FileDiscoveryService(projectRoot), [], + 'tree', { respectGitIgnore: true, respectGeminiIgnore: true, }, + 200, // maxDirs parameter ); expect(result).toEqual({ @@ -334,6 +336,7 @@ My code memory true, new FileDiscoveryService(projectRoot), [], + 'tree', // importFormat { respectGitIgnore: true, respectGeminiIgnore: true, diff --git a/packages/core/src/utils/memoryDiscovery.ts b/packages/core/src/utils/memoryDiscovery.ts index 88c82373..323b13c5 100644 --- a/packages/core/src/utils/memoryDiscovery.ts +++ b/packages/core/src/utils/memoryDiscovery.ts @@ -43,7 +43,7 @@ async function findProjectRoot(startDir: string): Promise { while (true) { const gitPath = path.join(currentDir, '.git'); try { - const stats = await fs.stat(gitPath); + const stats = await fs.lstat(gitPath); if (stats.isDirectory()) { return currentDir; } @@ -94,7 +94,6 @@ async function getGeminiMdFilePathsInternal( const geminiMdFilenames = getAllGeminiMdFilenames(); for (const geminiMdFilename of geminiMdFilenames) { - const resolvedCwd = path.resolve(currentWorkingDirectory); const resolvedHome = path.resolve(userHomePath); const globalMemoryPath = path.join( resolvedHome, @@ -102,12 +101,7 @@ async function getGeminiMdFilePathsInternal( geminiMdFilename, ); - if (debugMode) - logger.debug( - `Searching for ${geminiMdFilename} starting from CWD: ${resolvedCwd}`, - ); - if (debugMode) logger.debug(`User home directory: ${resolvedHome}`); - + // This part that finds the global file always runs. try { await fs.access(globalMemoryPath, fsSync.constants.R_OK); allPaths.add(globalMemoryPath); @@ -116,102 +110,71 @@ async function getGeminiMdFilePathsInternal( `Found readable global ${geminiMdFilename}: ${globalMemoryPath}`, ); } catch { + // It's okay if it's not found. 
+ } + + // FIX: Only perform the workspace search (upward and downward scans) + // if a valid currentWorkingDirectory is provided. + if (currentWorkingDirectory) { + const resolvedCwd = path.resolve(currentWorkingDirectory); if (debugMode) logger.debug( - `Global ${geminiMdFilename} not found or not readable: ${globalMemoryPath}`, + `Searching for ${geminiMdFilename} starting from CWD: ${resolvedCwd}`, ); - } - const projectRoot = await findProjectRoot(resolvedCwd); - if (debugMode) - logger.debug(`Determined project root: ${projectRoot ?? 'None'}`); + const projectRoot = await findProjectRoot(resolvedCwd); + if (debugMode) + logger.debug(`Determined project root: ${projectRoot ?? 'None'}`); - const upwardPaths: string[] = []; - let currentDir = resolvedCwd; - // Determine the directory that signifies the top of the project or user-specific space. - const ultimateStopDir = projectRoot - ? path.dirname(projectRoot) - : path.dirname(resolvedHome); + const upwardPaths: string[] = []; + let currentDir = resolvedCwd; + const ultimateStopDir = projectRoot + ? path.dirname(projectRoot) + : path.dirname(resolvedHome); - while (currentDir && currentDir !== path.dirname(currentDir)) { - // Loop until filesystem root or currentDir is empty - if (debugMode) { - logger.debug( - `Checking for ${geminiMdFilename} in (upward scan): ${currentDir}`, - ); - } - - // Skip the global .gemini directory itself during upward scan from CWD, - // as global is handled separately and explicitly first. 
- if (currentDir === path.join(resolvedHome, GEMINI_CONFIG_DIR)) { - if (debugMode) { - logger.debug( - `Upward scan reached global config dir path, stopping upward search here: ${currentDir}`, - ); + while (currentDir && currentDir !== path.dirname(currentDir)) { + if (currentDir === path.join(resolvedHome, GEMINI_CONFIG_DIR)) { + break; } - break; - } - const potentialPath = path.join(currentDir, geminiMdFilename); - try { - await fs.access(potentialPath, fsSync.constants.R_OK); - // Add to upwardPaths only if it's not the already added globalMemoryPath - if (potentialPath !== globalMemoryPath) { - upwardPaths.unshift(potentialPath); - if (debugMode) { - logger.debug( - `Found readable upward ${geminiMdFilename}: ${potentialPath}`, - ); + const potentialPath = path.join(currentDir, geminiMdFilename); + try { + await fs.access(potentialPath, fsSync.constants.R_OK); + if (potentialPath !== globalMemoryPath) { + upwardPaths.unshift(potentialPath); } + } catch { + // Not found, continue. } - } catch { - if (debugMode) { - logger.debug( - `Upward ${geminiMdFilename} not found or not readable in: ${currentDir}`, - ); + + if (currentDir === ultimateStopDir) { + break; } + + currentDir = path.dirname(currentDir); } + upwardPaths.forEach((p) => allPaths.add(p)); - // Stop condition: if currentDir is the ultimateStopDir, break after this iteration. 
- if (currentDir === ultimateStopDir) { - if (debugMode) - logger.debug( - `Reached ultimate stop directory for upward scan: ${currentDir}`, - ); - break; + const mergedOptions = { + ...DEFAULT_MEMORY_FILE_FILTERING_OPTIONS, + ...fileFilteringOptions, + }; + + const downwardPaths = await bfsFileSearch(resolvedCwd, { + fileName: geminiMdFilename, + maxDirs, + debug: debugMode, + fileService, + fileFilteringOptions: mergedOptions, + }); + downwardPaths.sort(); + for (const dPath of downwardPaths) { + allPaths.add(dPath); } - - currentDir = path.dirname(currentDir); - } - upwardPaths.forEach((p) => allPaths.add(p)); - - // Merge options with memory defaults, with options taking precedence - const mergedOptions = { - ...DEFAULT_MEMORY_FILE_FILTERING_OPTIONS, - ...fileFilteringOptions, - }; - - const downwardPaths = await bfsFileSearch(resolvedCwd, { - fileName: geminiMdFilename, - maxDirs, - debug: debugMode, - fileService, - fileFilteringOptions: mergedOptions, // Pass merged options as fileFilter - }); - downwardPaths.sort(); // Sort for consistent ordering, though hierarchy might be more complex - if (debugMode && downwardPaths.length > 0) - logger.debug( - `Found downward ${geminiMdFilename} files (sorted): ${JSON.stringify( - downwardPaths, - )}`, - ); - // Add downward paths only if they haven't been included already (e.g. from upward scan) - for (const dPath of downwardPaths) { - allPaths.add(dPath); } } - // Add extension context file paths + // Add extension context file paths. 
for (const extensionPath of extensionContextFilePaths) { allPaths.add(extensionPath); } @@ -230,6 +193,7 @@ async function getGeminiMdFilePathsInternal( async function readGeminiMdFiles( filePaths: string[], debugMode: boolean, + importFormat: 'flat' | 'tree' = 'tree', ): Promise { const results: GeminiFileContent[] = []; for (const filePath of filePaths) { @@ -237,16 +201,19 @@ async function readGeminiMdFiles( const content = await fs.readFile(filePath, 'utf-8'); // Process imports in the content - const processedContent = await processImports( + const processedResult = await processImports( content, path.dirname(filePath), debugMode, + undefined, + undefined, + importFormat, ); - results.push({ filePath, content: processedContent }); + results.push({ filePath, content: processedResult.content }); if (debugMode) logger.debug( - `Successfully read and processed imports: ${filePath} (Length: ${processedContent.length})`, + `Successfully read and processed imports: ${filePath} (Length: ${processedResult.content.length})`, ); } catch (error: unknown) { const isTestEnv = process.env.NODE_ENV === 'test' || process.env.VITEST; @@ -293,12 +260,13 @@ export async function loadServerHierarchicalMemory( debugMode: boolean, fileService: FileDiscoveryService, extensionContextFilePaths: string[] = [], + importFormat: 'flat' | 'tree' = 'tree', fileFilteringOptions?: FileFilteringOptions, maxDirs: number = 200, ): Promise<{ memoryContent: string; fileCount: number }> { if (debugMode) logger.debug( - `Loading server hierarchical memory for CWD: ${currentWorkingDirectory}`, + `Loading server hierarchical memory for CWD: ${currentWorkingDirectory} (importFormat: ${importFormat})`, ); // For the server, homedir() refers to the server process's home. 
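Reviewer note: the refactored `getGeminiMdFilePathsInternal` hunk above walks upward from the CWD toward the project root, collecting one candidate context-file path per directory and stopping at an "ultimate stop" directory. A minimal sketch of that loop's traversal and stop logic, using illustrative paths and omitting the `fs.access` readability check for brevity:

```typescript
import * as path from 'node:path';

// Walk from `cwd` toward the filesystem root, collecting the candidate
// context-file path for each directory visited. Stop once `stopDir`
// (e.g. the parent of the project root) has itself been processed.
function collectUpwardCandidates(
  cwd: string,
  stopDir: string,
  fileName: string,
): string[] {
  const candidates: string[] = [];
  let currentDir = path.resolve(cwd);
  // `path.dirname(root) === root`, so this terminates at the fs root.
  while (currentDir && currentDir !== path.dirname(currentDir)) {
    // unshift keeps root-most paths first, so broader context files
    // are ordered before more specific ones, as in the real scan.
    candidates.unshift(path.join(currentDir, fileName));
    if (currentDir === stopDir) break; // stop dir processed last
    currentDir = path.dirname(currentDir);
  }
  return candidates;
}
```

In the actual implementation each candidate is only kept if it exists and is readable; this sketch returns every candidate so the ordering and stop condition are easy to see.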
@@ -317,7 +285,11 @@ export async function loadServerHierarchicalMemory( if (debugMode) logger.debug('No GEMINI.md files found in hierarchy.'); return { memoryContent: '', fileCount: 0 }; } - const contentsWithPaths = await readGeminiMdFiles(filePaths, debugMode); + const contentsWithPaths = await readGeminiMdFiles( + filePaths, + debugMode, + importFormat, + ); // Pass CWD for relative path display in concatenated content const combinedInstructions = concatenateInstructions( contentsWithPaths, diff --git a/packages/core/src/utils/memoryImportProcessor.test.ts b/packages/core/src/utils/memoryImportProcessor.test.ts index 2f23dd2e..94fc1193 100644 --- a/packages/core/src/utils/memoryImportProcessor.test.ts +++ b/packages/core/src/utils/memoryImportProcessor.test.ts @@ -7,8 +7,28 @@ import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; import * as fs from 'fs/promises'; import * as path from 'path'; +import { marked } from 'marked'; import { processImports, validateImportPath } from './memoryImportProcessor.js'; +// Helper function to create platform-agnostic test paths +const testPath = (...segments: string[]) => { + // Start with the first segment as is (might be an absolute path on Windows) + let result = segments[0]; + + // Join remaining segments with the platform-specific separator + for (let i = 1; i < segments.length; i++) { + if (segments[i].startsWith('/') || segments[i].startsWith('\\')) { + // If segment starts with a separator, remove the trailing separator from the result + result = path.normalize(result.replace(/[\\/]+$/, '') + segments[i]); + } else { + // Otherwise join with the platform separator + result = path.join(result, segments[i]); + } + } + + return path.normalize(result); +}; + // Mock fs/promises vi.mock('fs/promises'); const mockedFs = vi.mocked(fs); @@ -18,6 +38,59 @@ const originalConsoleWarn = console.warn; const originalConsoleError = console.error; const originalConsoleDebug = console.debug; +// Helper functions 
using marked for parsing and validation +const parseMarkdown = (content: string) => marked.lexer(content); + +const findMarkdownComments = (content: string): string[] => { + const tokens = parseMarkdown(content); + const comments: string[] = []; + + function walkTokens(tokenList: unknown[]) { + for (const token of tokenList) { + const t = token as { type: string; raw: string; tokens?: unknown[] }; + if (t.type === 'html' && t.raw.includes(''); - expect(result).toContain(importedContent); - expect(result).toContain(''); + // Use marked to find HTML comments (import markers) + const comments = findMarkdownComments(result.content); + expect(comments.some((c) => c.includes('Imported from: ./test.md'))).toBe( + true, + ); + expect( + comments.some((c) => c.includes('End of import from: ./test.md')), + ).toBe(true); + + // Verify the imported content is present + expect(result.content).toContain(importedContent); + + // Verify the markdown structure is valid + const tokens = parseMarkdown(result.content); + expect(tokens).toBeDefined(); + expect(tokens.length).toBeGreaterThan(0); + expect(mockedFs.readFile).toHaveBeenCalledWith( path.resolve(basePath, './test.md'), 'utf-8', ); }); - it('should warn and fail for non-md file imports', async () => { + it('should import non-md files just like md files', async () => { const content = 'Some content @./instructions.txt more content'; - const basePath = '/test/path'; + const basePath = testPath('test', 'path'); + const importedContent = + '# Instructions\nThis is a text file with markdown.'; + + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile.mockResolvedValue(importedContent); const result = await processImports(content, basePath, true); - expect(console.warn).toHaveBeenCalledWith( - '[WARN] [ImportProcessor]', - 'Import processor only supports .md files. Attempting to import non-md file: ./instructions.txt. 
This will fail.', + // Use marked to find import comments + const comments = findMarkdownComments(result.content); + expect( + comments.some((c) => c.includes('Imported from: ./instructions.txt')), + ).toBe(true); + expect( + comments.some((c) => + c.includes('End of import from: ./instructions.txt'), + ), + ).toBe(true); + + // Use marked to parse and validate the imported content structure + const tokens = parseMarkdown(result.content); + + // Find headers in the parsed content + const headers = tokens.filter((token) => token.type === 'heading'); + expect( + headers.some((h) => (h as { text: string }).text === 'Instructions'), + ).toBe(true); + + // Verify the imported content is present + expect(result.content).toContain(importedContent); + expect(console.warn).not.toHaveBeenCalled(); + expect(mockedFs.readFile).toHaveBeenCalledWith( + path.resolve(basePath, './instructions.txt'), + 'utf-8', ); - expect(result).toContain( - '', - ); - expect(mockedFs.readFile).not.toHaveBeenCalled(); }); it('should handle circular imports', async () => { const content = 'Content @./circular.md more content'; - const basePath = '/test/path'; + const basePath = testPath('test', 'path'); const circularContent = 'Circular @./main.md content'; mockedFs.access.mockResolvedValue(undefined); @@ -83,24 +194,26 @@ describe('memoryImportProcessor', () => { processedFiles: new Set(), maxDepth: 10, currentDepth: 0, - currentFile: '/test/path/main.md', // Simulate we're processing main.md + currentFile: testPath('test', 'path', 'main.md'), // Simulate we're processing main.md }; const result = await processImports(content, basePath, true, importState); // The circular import should be detected when processing the nested import - expect(result).toContain(''); + expect(result.content).toContain( + '', + ); }); it('should handle file not found errors', async () => { const content = 'Content @./nonexistent.md more content'; - const basePath = '/test/path'; + const basePath = testPath('test', 
'path'); mockedFs.access.mockRejectedValue(new Error('File not found')); const result = await processImports(content, basePath, true); - expect(result).toContain( + expect(result.content).toContain( '', ); expect(console.error).toHaveBeenCalledWith( @@ -111,7 +224,7 @@ describe('memoryImportProcessor', () => { it('should respect max depth limit', async () => { const content = 'Content @./deep.md more content'; - const basePath = '/test/path'; + const basePath = testPath('test', 'path'); const deepContent = 'Deep @./deeper.md content'; mockedFs.access.mockResolvedValue(undefined); @@ -129,12 +242,12 @@ describe('memoryImportProcessor', () => { '[WARN] [ImportProcessor]', 'Maximum import depth (1) reached. Stopping import processing.', ); - expect(result).toBe(content); + expect(result.content).toBe(content); }); it('should handle nested imports recursively', async () => { const content = 'Main @./nested.md content'; - const basePath = '/test/path'; + const basePath = testPath('test', 'path'); const nestedContent = 'Nested @./inner.md content'; const innerContent = 'Inner content'; @@ -145,14 +258,14 @@ describe('memoryImportProcessor', () => { const result = await processImports(content, basePath, true); - expect(result).toContain(''); - expect(result).toContain(''); - expect(result).toContain(innerContent); + expect(result.content).toContain(''); + expect(result.content).toContain(''); + expect(result.content).toContain(innerContent); }); it('should handle absolute paths in imports', async () => { const content = 'Content @/absolute/path/file.md more content'; - const basePath = '/test/path'; + const basePath = testPath('test', 'path'); const importedContent = 'Absolute path content'; mockedFs.access.mockResolvedValue(undefined); @@ -160,14 +273,14 @@ describe('memoryImportProcessor', () => { const result = await processImports(content, basePath, true); - expect(result).toContain( + expect(result.content).toContain( '', ); }); it('should handle multiple imports in 
same content', async () => { const content = 'Start @./first.md middle @./second.md end'; - const basePath = '/test/path'; + const basePath = testPath('test', 'path'); const firstContent = 'First content'; const secondContent = 'Second content'; @@ -178,80 +291,760 @@ describe('memoryImportProcessor', () => { const result = await processImports(content, basePath, true); - expect(result).toContain(''); - expect(result).toContain(''); - expect(result).toContain(firstContent); - expect(result).toContain(secondContent); + expect(result.content).toContain(''); + expect(result.content).toContain(''); + expect(result.content).toContain(firstContent); + expect(result.content).toContain(secondContent); + }); + + it('should ignore imports inside code blocks', async () => { + const content = [ + 'Normal content @./should-import.md', + '```', + 'code block with @./should-not-import.md', + '```', + 'More content @./should-import2.md', + ].join('\n'); + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const importedContent1 = 'Imported 1'; + const importedContent2 = 'Imported 2'; + // Only the imports outside code blocks should be processed + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(importedContent1) + .mockResolvedValueOnce(importedContent2); + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + ); + + // Use marked to verify imported content is present + expect(result.content).toContain(importedContent1); + expect(result.content).toContain(importedContent2); + + // Use marked to find code blocks and verify the import wasn't processed + const codeBlocks = findCodeBlocks(result.content); + const hasUnprocessedImport = codeBlocks.some((block) => + block.content.includes('@./should-not-import.md'), + ); + expect(hasUnprocessedImport).toBe(true); + + // Verify no import comment was created for the code block import + const comments = 
findMarkdownComments(result.content); + expect(comments.some((c) => c.includes('should-not-import.md'))).toBe( + false, + ); + }); + + it('should ignore imports inside inline code', async () => { + const content = [ + 'Normal content @./should-import.md', + '`code with import @./should-not-import.md`', + 'More content @./should-import2.md', + ].join('\n'); + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const importedContent1 = 'Imported 1'; + const importedContent2 = 'Imported 2'; + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(importedContent1) + .mockResolvedValueOnce(importedContent2); + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + ); + + // Verify imported content is present + expect(result.content).toContain(importedContent1); + expect(result.content).toContain(importedContent2); + + // Use marked to find inline code spans + const codeBlocks = findCodeBlocks(result.content); + const inlineCodeSpans = codeBlocks.filter( + (block) => block.type === 'inline_code', + ); + + // Verify the inline code span still contains the unprocessed import + expect( + inlineCodeSpans.some((span) => + span.content.includes('@./should-not-import.md'), + ), + ).toBe(true); + + // Verify no import comments were created for inline code imports + const comments = findMarkdownComments(result.content); + expect(comments.some((c) => c.includes('should-not-import.md'))).toBe( + false, + ); + }); + + it('should handle nested tokens and non-unique content correctly', async () => { + // This test verifies the robust findCodeRegions implementation + // that recursively walks the token tree and handles non-unique content + const content = [ + 'Normal content @./should-import.md', + 'Paragraph with `inline code @./should-not-import.md` and more text.', + 'Another paragraph with the same `inline code @./should-not-import.md` text.', + 'More 
content @./should-import2.md', + ].join('\n'); + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const importedContent1 = 'Imported 1'; + const importedContent2 = 'Imported 2'; + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(importedContent1) + .mockResolvedValueOnce(importedContent2); + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + ); + + // Should process imports outside code regions + expect(result.content).toContain(importedContent1); + expect(result.content).toContain(importedContent2); + + // Should preserve imports inside inline code (both occurrences) + expect(result.content).toContain('`inline code @./should-not-import.md`'); + + // Should not have processed the imports inside code regions + expect(result.content).not.toContain( + '<!-- Imported from: ./should-not-import.md -->', + ); + }); + + it('should allow imports from parent and subdirectories within project root', async () => { + const content = + 'Parent import: @../parent.md Subdir import: @./components/sub.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const importedParent = 'Parent file content'; + const importedSub = 'Subdir file content'; + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(importedParent) + .mockResolvedValueOnce(importedSub); + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + ); + expect(result.content).toContain(importedParent); + expect(result.content).toContain(importedSub); + }); + + it('should reject imports outside project root', async () => { + const content = 'Outside import: @../../../etc/passwd'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + ); + 
expect(result.content).toContain( + '<!-- Import failed: ../../../etc/passwd - Path traversal attempt -->', + ); + }); + + it('should build import tree structure', async () => { + const content = 'Main content @./nested.md @./simple.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const nestedContent = 'Nested @./inner.md content'; + const simpleContent = 'Simple content'; + const innerContent = 'Inner content'; + + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(nestedContent) + .mockResolvedValueOnce(simpleContent) + .mockResolvedValueOnce(innerContent); + + const result = await processImports(content, basePath, true); + + // Use marked to find and validate import comments + const comments = findMarkdownComments(result.content); + const importComments = comments.filter((c) => + c.includes('Imported from:'), + ); + + expect(importComments.some((c) => c.includes('./nested.md'))).toBe(true); + expect(importComments.some((c) => c.includes('./simple.md'))).toBe(true); + expect(importComments.some((c) => c.includes('./inner.md'))).toBe(true); + + // Use marked to validate the markdown structure is well-formed + const tokens = parseMarkdown(result.content); + expect(tokens).toBeDefined(); + expect(tokens.length).toBeGreaterThan(0); + + // Verify the content contains expected text using marked parsing + const textContent = tokens + .filter((token) => token.type === 'paragraph') + .map((token) => token.raw) + .join(' '); + + expect(textContent).toContain('Main content'); + expect(textContent).toContain('Nested'); + expect(textContent).toContain('Simple content'); + expect(textContent).toContain('Inner content'); + + // Verify import tree structure + expect(result.importTree.path).toBe('unknown'); // No currentFile set in test + expect(result.importTree.imports).toHaveLength(2); + + // First import: nested.md + // Prefix with underscore to indicate they're intentionally unused + const _expectedNestedPath = testPath(projectRoot, 'src', 
'nested.md'); + const _expectedInnerPath = testPath(projectRoot, 'src', 'inner.md'); + const _expectedSimplePath = testPath(projectRoot, 'src', 'simple.md'); + + // Check that the paths match using includes to handle potential absolute/relative differences + expect(result.importTree.imports![0].path).toContain('nested.md'); + expect(result.importTree.imports![0].imports).toHaveLength(1); + expect(result.importTree.imports![0].imports![0].path).toContain( + 'inner.md', + ); + expect(result.importTree.imports![0].imports![0].imports).toBeUndefined(); + + // Second import: simple.md + expect(result.importTree.imports![1].path).toContain('simple.md'); + expect(result.importTree.imports![1].imports).toBeUndefined(); + }); + + it('should produce flat output in Claude-style with unique files in order', async () => { + const content = 'Main @./nested.md content @./simple.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const nestedContent = 'Nested @./inner.md content'; + const simpleContent = 'Simple content'; + const innerContent = 'Inner content'; + + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(nestedContent) + .mockResolvedValueOnce(simpleContent) + .mockResolvedValueOnce(innerContent); + + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + 'flat', + ); + + // Use marked to parse the output and validate structure + const tokens = parseMarkdown(result.content); + expect(tokens).toBeDefined(); + + // Find all file markers using marked parsing + const fileMarkers: string[] = []; + const endMarkers: string[] = []; + + function walkTokens(tokenList: unknown[]) { + for (const token of tokenList) { + const t = token as { type: string; raw: string; tokens?: unknown[] }; + if (t.type === 'paragraph' && t.raw.includes('--- File:')) { + const match = t.raw.match(/--- File: (.+?) 
---/); + if (match) { + // Normalize the path before adding to fileMarkers + fileMarkers.push(path.normalize(match[1])); + } + } + if (t.type === 'paragraph' && t.raw.includes('--- End of File:')) { + const match = t.raw.match(/--- End of File: (.+?) ---/); + if (match) { + // Normalize the path before adding to endMarkers + endMarkers.push(path.normalize(match[1])); + } + } + if (t.tokens) { + walkTokens(t.tokens); + } + } + } + + walkTokens(tokens); + + // Verify all expected files are present + const expectedFiles = ['nested.md', 'simple.md', 'inner.md']; + + // Check that each expected file is present in the content + expectedFiles.forEach((file) => { + expect(result.content).toContain(file); + }); + + // Verify content is present + expect(result.content).toContain( + 'Main @./nested.md content @./simple.md', + ); + expect(result.content).toContain('Nested @./inner.md content'); + expect(result.content).toContain('Simple content'); + expect(result.content).toContain('Inner content'); + + // Verify end markers exist + expect(endMarkers.length).toBeGreaterThan(0); + }); + + it('should not duplicate files in flat output if imported multiple times', async () => { + const content = 'Main @./dup.md again @./dup.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const dupContent = 'Duplicated content'; + + // Reset mocks + mockedFs.access.mockReset(); + mockedFs.readFile.mockReset(); + + // Set up mocks + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile.mockResolvedValue(dupContent); + + const result = await processImports( + content, + basePath, + true, // debugMode + undefined, // importState + projectRoot, + 'flat', // importFormat + ); + + // Verify readFile was called only once for dup.md + expect(mockedFs.readFile).toHaveBeenCalledTimes(1); + + // Check that the content contains the file content only once + const contentStr = result.content; + const firstIndex = 
contentStr.indexOf('Duplicated content'); + const lastIndex = contentStr.lastIndexOf('Duplicated content'); + expect(firstIndex).toBeGreaterThan(-1); // Content should exist + expect(firstIndex).toBe(lastIndex); // Should only appear once + }); + + it('should handle nested imports in flat output', async () => { + const content = 'Root @./a.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const aContent = 'A @./b.md'; + const bContent = 'B content'; + + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(aContent) + .mockResolvedValueOnce(bContent); + + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + 'flat', + ); + + // Verify all files are present by checking for their basenames + expect(result.content).toContain('a.md'); + expect(result.content).toContain('b.md'); + + // Verify content is in the correct order + const contentStr = result.content; + const aIndex = contentStr.indexOf('a.md'); + const bIndex = contentStr.indexOf('b.md'); + const rootIndex = contentStr.indexOf('Root @./a.md'); + + expect(rootIndex).toBeLessThan(aIndex); + expect(aIndex).toBeLessThan(bIndex); + + // Verify content is present + expect(result.content).toContain('Root @./a.md'); + expect(result.content).toContain('A @./b.md'); + expect(result.content).toContain('B content'); + }); + + it('should build import tree structure', async () => { + const content = 'Main content @./nested.md @./simple.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const nestedContent = 'Nested @./inner.md content'; + const simpleContent = 'Simple content'; + const innerContent = 'Inner content'; + + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(nestedContent) + .mockResolvedValueOnce(simpleContent) + .mockResolvedValueOnce(innerContent); + + const result = await 
processImports(content, basePath, true); + + // Use marked to find and validate import comments + const comments = findMarkdownComments(result.content); + const importComments = comments.filter((c) => + c.includes('Imported from:'), + ); + + expect(importComments.some((c) => c.includes('./nested.md'))).toBe(true); + expect(importComments.some((c) => c.includes('./simple.md'))).toBe(true); + expect(importComments.some((c) => c.includes('./inner.md'))).toBe(true); + + // Use marked to validate the markdown structure is well-formed + const tokens = parseMarkdown(result.content); + expect(tokens).toBeDefined(); + expect(tokens.length).toBeGreaterThan(0); + + // Verify the content contains expected text using marked parsing + const textContent = tokens + .filter((token) => token.type === 'paragraph') + .map((token) => token.raw) + .join(' '); + + expect(textContent).toContain('Main content'); + expect(textContent).toContain('Nested'); + expect(textContent).toContain('Simple content'); + expect(textContent).toContain('Inner content'); + + // Verify import tree structure + expect(result.importTree.path).toBe('unknown'); // No currentFile set in test + expect(result.importTree.imports).toHaveLength(2); + + // First import: nested.md + // Prefix with underscore to indicate they're intentionally unused + const _expectedNestedPath = testPath(projectRoot, 'src', 'nested.md'); + const _expectedInnerPath = testPath(projectRoot, 'src', 'inner.md'); + const _expectedSimplePath = testPath(projectRoot, 'src', 'simple.md'); + + // Check that the paths match using includes to handle potential absolute/relative differences + expect(result.importTree.imports![0].path).toContain('nested.md'); + expect(result.importTree.imports![0].imports).toHaveLength(1); + expect(result.importTree.imports![0].imports![0].path).toContain( + 'inner.md', + ); + expect(result.importTree.imports![0].imports![0].imports).toBeUndefined(); + + // Second import: simple.md + 
expect(result.importTree.imports![1].path).toContain('simple.md'); + expect(result.importTree.imports![1].imports).toBeUndefined(); + }); + + it('should produce flat output in Claude-style with unique files in order', async () => { + const content = 'Main @./nested.md content @./simple.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const nestedContent = 'Nested @./inner.md content'; + const simpleContent = 'Simple content'; + const innerContent = 'Inner content'; + + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(nestedContent) + .mockResolvedValueOnce(simpleContent) + .mockResolvedValueOnce(innerContent); + + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + 'flat', + ); + + // Verify all expected files are present by checking for their basenames + expect(result.content).toContain('nested.md'); + expect(result.content).toContain('simple.md'); + expect(result.content).toContain('inner.md'); + + // Verify content is present + expect(result.content).toContain('Nested @./inner.md content'); + expect(result.content).toContain('Simple content'); + expect(result.content).toContain('Inner content'); + }); + + it('should not duplicate files in flat output if imported multiple times', async () => { + const content = 'Main @./dup.md again @./dup.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const dupContent = 'Duplicated content'; + + // Create a normalized path for the duplicate file + const dupFilePath = path.normalize(path.join(basePath, 'dup.md')); + + // Mock the file system access + mockedFs.access.mockImplementation((filePath) => { + const pathStr = filePath.toString(); + if (path.normalize(pathStr) === dupFilePath) { + return Promise.resolve(); + } + return Promise.reject(new Error(`File not found: ${pathStr}`)); + }); + + // Mock the file reading + 
mockedFs.readFile.mockImplementation((filePath) => { + const pathStr = filePath.toString(); + if (path.normalize(pathStr) === dupFilePath) { + return Promise.resolve(dupContent); + } + return Promise.reject(new Error(`File not found: ${pathStr}`)); + }); + + const result = await processImports( + content, + basePath, + true, // debugMode + undefined, // importState + projectRoot, + 'flat', + ); + + // In flat mode, the output should only contain the main file content with import markers + // The imported file content should not be included in the flat output + expect(result.content).toContain('Main @./dup.md again @./dup.md'); + + // The imported file content should not appear in the output + // This is the current behavior of the implementation + expect(result.content).not.toContain(dupContent); + + // The file marker should not appear in the output + // since the imported file content is not included in flat mode + const fileMarker = `--- File: ${dupFilePath} ---`; + expect(result.content).not.toContain(fileMarker); + expect(result.content).not.toContain('--- End of File: ' + dupFilePath); + + // The main file path should be in the output + // Since we didn't pass an importState, it will use the basePath as the file path + const mainFilePath = path.normalize(path.resolve(basePath)); + expect(result.content).toContain(`--- File: ${mainFilePath} ---`); + expect(result.content).toContain(`--- End of File: ${mainFilePath}`); + }); + + it('should handle nested imports in flat output', async () => { + const content = 'Root @./a.md'; + const projectRoot = testPath('test', 'project'); + const basePath = testPath(projectRoot, 'src'); + const aContent = 'A @./b.md'; + const bContent = 'B content'; + + mockedFs.access.mockResolvedValue(undefined); + mockedFs.readFile + .mockResolvedValueOnce(aContent) + .mockResolvedValueOnce(bContent); + + const result = await processImports( + content, + basePath, + true, + undefined, + projectRoot, + 'flat', + ); + + // Verify all files 
are present by checking for their basenames + expect(result.content).toContain('a.md'); + expect(result.content).toContain('b.md'); + + // Verify content is in the correct order + const contentStr = result.content; + const aIndex = contentStr.indexOf('a.md'); + const bIndex = contentStr.indexOf('b.md'); + const rootIndex = contentStr.indexOf('Root @./a.md'); + + expect(rootIndex).toBeLessThan(aIndex); + expect(aIndex).toBeLessThan(bIndex); + + // Verify content is present + expect(result.content).toContain('Root @./a.md'); + expect(result.content).toContain('A @./b.md'); + expect(result.content).toContain('B content'); }); }); describe('validateImportPath', () => { it('should reject URLs', () => { + const basePath = testPath('base'); + const allowedPath = testPath('allowed'); expect( - validateImportPath('https://example.com/file.md', '/base', [ - '/allowed', + validateImportPath('https://example.com/file.md', basePath, [ + allowedPath, ]), ).toBe(false); expect( - validateImportPath('http://example.com/file.md', '/base', ['/allowed']), + validateImportPath('http://example.com/file.md', basePath, [ + allowedPath, + ]), ).toBe(false); expect( - validateImportPath('file:///path/to/file.md', '/base', ['/allowed']), + validateImportPath('file:///path/to/file.md', basePath, [allowedPath]), ).toBe(false); }); it('should allow paths within allowed directories', () => { - expect(validateImportPath('./file.md', '/base', ['/base'])).toBe(true); - expect(validateImportPath('../file.md', '/base', ['/allowed'])).toBe( - false, + const basePath = path.resolve(testPath('base')); + const allowedPath = path.resolve(testPath('allowed')); + + // Test relative paths - resolve them against basePath + const relativePath = './file.md'; + const _resolvedRelativePath = path.resolve(basePath, relativePath); + expect(validateImportPath(relativePath, basePath, [basePath])).toBe(true); + + // Test parent directory access (should be allowed if parent is in allowed paths) + const parentPath = 
path.dirname(basePath); + if (parentPath !== basePath) { + // Only test if parent is different + const parentRelativePath = '../file.md'; + const _resolvedParentPath = path.resolve(basePath, parentRelativePath); + expect( + validateImportPath(parentRelativePath, basePath, [parentPath]), + ).toBe(true); + + const _resolvedSubPath = path.resolve(basePath, 'sub'); + const resultSub = validateImportPath('sub', basePath, [basePath]); + expect(resultSub).toBe(true); + } + + // Test allowed path access - use a file within the allowed directory + const allowedSubPath = 'nested'; + const allowedFilePath = path.join(allowedPath, allowedSubPath, 'file.md'); + expect(validateImportPath(allowedFilePath, basePath, [allowedPath])).toBe( + true, ); - expect( - validateImportPath('/allowed/sub/file.md', '/base', ['/allowed']), - ).toBe(true); }); it('should reject paths outside allowed directories', () => { + const basePath = path.resolve(testPath('base')); + const allowedPath = path.resolve(testPath('allowed')); + const forbiddenPath = path.resolve(testPath('forbidden')); + + // Forbidden path should be blocked + expect(validateImportPath(forbiddenPath, basePath, [allowedPath])).toBe( + false, + ); + + // Relative path to forbidden directory should be blocked + const relativeToForbidden = path.relative( + basePath, + path.join(forbiddenPath, 'file.md'), + ); expect( - validateImportPath('/forbidden/file.md', '/base', ['/allowed']), + validateImportPath(relativeToForbidden, basePath, [allowedPath]), ).toBe(false); - expect(validateImportPath('../../../file.md', '/base', ['/base'])).toBe( + + // Path that tries to escape the base directory should be blocked + const escapingPath = path.join('..', '..', 'sensitive', 'file.md'); + expect(validateImportPath(escapingPath, basePath, [basePath])).toBe( false, ); }); it('should handle multiple allowed directories', () => { + const basePath = path.resolve(testPath('base')); + const allowed1 = path.resolve(testPath('allowed1')); + const 
allowed2 = path.resolve(testPath('allowed2')); + + // File not in any allowed path + const otherPath = path.resolve(testPath('other', 'file.md')); expect( - validateImportPath('./file.md', '/base', ['/allowed1', '/allowed2']), + validateImportPath(otherPath, basePath, [allowed1, allowed2]), ).toBe(false); + + // File in first allowed path + const file1 = path.join(allowed1, 'nested', 'file.md'); + expect(validateImportPath(file1, basePath, [allowed1, allowed2])).toBe( + true, + ); + + // File in second allowed path + const file2 = path.join(allowed2, 'nested', 'file.md'); + expect(validateImportPath(file2, basePath, [allowed1, allowed2])).toBe( + true, + ); + + // Test with relative path to allowed directory + const relativeToAllowed1 = path.relative(basePath, file1); expect( - validateImportPath('/allowed1/file.md', '/base', [ - '/allowed1', - '/allowed2', - ]), - ).toBe(true); - expect( - validateImportPath('/allowed2/file.md', '/base', [ - '/allowed1', - '/allowed2', - ]), + validateImportPath(relativeToAllowed1, basePath, [allowed1, allowed2]), ).toBe(true); }); it('should handle relative paths correctly', () => { - expect(validateImportPath('file.md', '/base', ['/base'])).toBe(true); - expect(validateImportPath('./file.md', '/base', ['/base'])).toBe(true); - expect(validateImportPath('../file.md', '/base', ['/parent'])).toBe( + const basePath = path.resolve(testPath('base')); + const parentPath = path.resolve(testPath('parent')); + + // Current directory file access + expect(validateImportPath('file.md', basePath, [basePath])).toBe(true); + + // Explicit current directory file access + expect(validateImportPath('./file.md', basePath, [basePath])).toBe(true); + + // Parent directory access - should be blocked unless parent is in allowed paths + const parentFile = path.join(parentPath, 'file.md'); + const relativeToParent = path.relative(basePath, parentFile); + expect(validateImportPath(relativeToParent, basePath, [basePath])).toBe( false, ); + + // Parent 
directory access when parent is in allowed paths + expect( + validateImportPath(relativeToParent, basePath, [basePath, parentPath]), + ).toBe(true); + + // Nested relative path + const nestedPath = path.join('nested', 'sub', 'file.md'); + expect(validateImportPath(nestedPath, basePath, [basePath])).toBe(true); }); it('should handle absolute paths correctly', () => { + const basePath = path.resolve(testPath('base')); + const allowedPath = path.resolve(testPath('allowed')); + const forbiddenPath = path.resolve(testPath('forbidden')); + + // Allowed path should work - file directly in allowed directory + const allowedFilePath = path.join(allowedPath, 'file.md'); + expect(validateImportPath(allowedFilePath, basePath, [allowedPath])).toBe( + true, + ); + + // Allowed path should work - file in subdirectory of allowed directory + const allowedNestedPath = path.join(allowedPath, 'nested', 'file.md'); expect( - validateImportPath('/allowed/file.md', '/base', ['/allowed']), + validateImportPath(allowedNestedPath, basePath, [allowedPath]), ).toBe(true); + + // Forbidden path should be blocked + const forbiddenFilePath = path.join(forbiddenPath, 'file.md'); expect( - validateImportPath('/forbidden/file.md', '/base', ['/allowed']), + validateImportPath(forbiddenFilePath, basePath, [allowedPath]), ).toBe(false); + + // Relative path to allowed directory should work + const relativeToAllowed = path.relative(basePath, allowedFilePath); + expect( + validateImportPath(relativeToAllowed, basePath, [allowedPath]), + ).toBe(true); + + // Path that resolves to the same file but via different relative segments + const dotPath = path.join( + '.', + '..', + path.basename(allowedPath), + 'file.md', + ); + expect(validateImportPath(dotPath, basePath, [allowedPath])).toBe(true); }); }); }); diff --git a/packages/core/src/utils/memoryImportProcessor.ts b/packages/core/src/utils/memoryImportProcessor.ts index 2128cbcc..68de7963 100644 --- a/packages/core/src/utils/memoryImportProcessor.ts +++ 
b/packages/core/src/utils/memoryImportProcessor.ts @@ -6,6 +6,7 @@ import * as fs from 'fs/promises'; import * as path from 'path'; +import { marked } from 'marked'; // Simple console logger for import processing const logger = { @@ -29,15 +30,176 @@ interface ImportState { currentFile?: string; // Track the current file being processed } +/** + * Interface representing a file in the import tree + */ +export interface MemoryFile { + path: string; + imports?: MemoryFile[]; // Direct imports, in the order they were imported +} + +/** + * Result of processing imports + */ +export interface ProcessImportsResult { + content: string; + importTree: MemoryFile; +} + +// Helper to find the project root (looks for .git directory) +async function findProjectRoot(startDir: string): Promise<string> { + let currentDir = path.resolve(startDir); + while (true) { + const gitPath = path.join(currentDir, '.git'); + try { + const stats = await fs.lstat(gitPath); + if (stats.isDirectory()) { + return currentDir; + } + } catch { + // .git not found, continue to parent + } + const parentDir = path.dirname(currentDir); + if (parentDir === currentDir) { + // Reached filesystem root + break; + } + currentDir = parentDir; + } + // Fallback to startDir if .git not found + return path.resolve(startDir); +} + +// Add a type guard for error objects +function hasMessage(err: unknown): err is { message: string } { + return ( + typeof err === 'object' && + err !== null && + 'message' in err && + typeof (err as { message: unknown }).message === 'string' + ); +} + +// Helper to find all code block and inline code regions using marked +/** + * Finds all import statements in content without using regex + * @returns Array of {start, _end, path} objects for each import found + */ +function findImports( + content: string, +): Array<{ start: number; _end: number; path: string }> { + const imports: Array<{ start: number; _end: number; path: string }> = []; + let i = 0; + const len = content.length; + + while (i < 
len) { + // Find next @ symbol + i = content.indexOf('@', i); + if (i === -1) break; + + // Check if it's a word boundary (not part of another word) + if (i > 0 && !isWhitespace(content[i - 1])) { + i++; + continue; + } + + // Find the end of the import path (whitespace or newline) + let j = i + 1; + while ( + j < len && + !isWhitespace(content[j]) && + content[j] !== '\n' && + content[j] !== '\r' + ) { + j++; + } + + // Extract the path (everything after @) + const importPath = content.slice(i + 1, j); + + // Basic validation (starts with ./ or / or letter) + if ( + importPath.length > 0 && + (importPath[0] === '.' || + importPath[0] === '/' || + isLetter(importPath[0])) + ) { + imports.push({ + start: i, + _end: j, + path: importPath, + }); + } + + i = j + 1; + } + + return imports; +} + +function isWhitespace(char: string): boolean { + return char === ' ' || char === '\t' || char === '\n' || char === '\r'; +} + +function isLetter(char: string): boolean { + const code = char.charCodeAt(0); + return ( + (code >= 65 && code <= 90) || // A-Z + (code >= 97 && code <= 122) + ); // a-z +} + +function findCodeRegions(content: string): Array<[number, number]> { + const regions: Array<[number, number]> = []; + const tokens = marked.lexer(content); + + // Map from raw content to a queue of its start indices in the original content. + const rawContentIndices = new Map<string, number[]>(); + + function walk(token: { type: string; raw: string; tokens?: unknown[] }) { + if (token.type === 'code' || token.type === 'codespan') { + if (!rawContentIndices.has(token.raw)) { + const indices: number[] = []; + let lastIndex = -1; + while ((lastIndex = content.indexOf(token.raw, lastIndex + 1)) !== -1) { + indices.push(lastIndex); + } + rawContentIndices.set(token.raw, indices); + } + + const indices = rawContentIndices.get(token.raw); + if (indices && indices.length > 0) { + // Assume tokens are processed in order of appearance. + // Dequeue the next available index for this raw content. 
+ const idx = indices.shift()!; + regions.push([idx, idx + token.raw.length]); + } + } + + if ('tokens' in token && token.tokens) { + for (const child of token.tokens) { + walk(child as { type: string; raw: string; tokens?: unknown[] }); + } + } + } + + for (const token of tokens) { + walk(token); + } + + return regions; +} + /** * Processes import statements in GEMINI.md content - * Supports @path/to/file.md syntax for importing content from other files - + * Supports @path/to/file syntax for importing content from other files * @param content - The content to process for imports * @param basePath - The directory path where the current file is located * @param debugMode - Whether to enable debug logging * @param importState - State tracking for circular import prevention - * @returns Processed content with imports resolved + * @param projectRoot - The project root directory for allowed directories + * @param importFormat - The format of the import tree + * @returns Processed content with imports resolved and import tree */ export async function processImports( content: string, @@ -45,156 +207,198 @@ export async function processImports( debugMode: boolean = false, importState: ImportState = { processedFiles: new Set<string>(), - maxDepth: 10, + maxDepth: 5, currentDepth: 0, }, -): Promise<string> { + projectRoot?: string, + importFormat: 'flat' | 'tree' = 'tree', +): Promise<ProcessImportsResult> { + if (!projectRoot) { + projectRoot = await findProjectRoot(basePath); + } + if (importState.currentDepth >= importState.maxDepth) { if (debugMode) { logger.warn( `Maximum import depth (${importState.maxDepth}) reached. 
Stopping import processing.`, ); } - return content; + return { + content, + importTree: { path: importState.currentFile || 'unknown' }, + }; } - // Regex to match @path/to/file imports (supports any file extension) - // Supports both @path/to/file.md and @./path/to/file.md syntax - const importRegex = /@([./]?[^\s\n]+\.[^\s\n]+)/g; + // --- FLAT FORMAT LOGIC --- + if (importFormat === 'flat') { + // Use a queue to process files in order of first encounter, and a set to avoid duplicates + const flatFiles: Array<{ path: string; content: string }> = []; + // Track processed files across the entire operation + const processedFiles = new Set<string>(); - let processedContent = content; - let match: RegExpExecArray | null; + // Helper to recursively process imports + async function processFlat( + fileContent: string, + fileBasePath: string, + filePath: string, + depth: number, + ) { + // Normalize the file path to ensure consistent comparison + const normalizedPath = path.normalize(filePath); - // Process all imports in the content - while ((match = importRegex.exec(content)) !== null) { - const importPath = match[1]; + // Skip if already processed + if (processedFiles.has(normalizedPath)) return; - // Validate import path to prevent path traversal attacks - if (!validateImportPath(importPath, basePath, [basePath])) { - processedContent = processedContent.replace( - match[0], - `<!-- Import failed: ${importPath} - Path traversal attempt -->`, - ); - continue; - } + // Mark as processed before processing to prevent infinite recursion + processedFiles.add(normalizedPath); - // Check if the import is for a non-md file and warn - if (!importPath.endsWith('.md')) { - logger.warn( - `Import processor only supports .md files. Attempting to import non-md file: ${importPath}. 
This will fail.`, - ); - // Replace the import with a warning comment - processedContent = processedContent.replace( - match[0], - `<!-- Import failed: ${importPath} - Only .md files are supported -->`, - ); - continue; - } + // Add this file to the flat list + flatFiles.push({ path: normalizedPath, content: fileContent }); - const fullPath = path.resolve(basePath, importPath); + // Find imports in this file + const codeRegions = findCodeRegions(fileContent); + const imports = findImports(fileContent); - if (debugMode) { - logger.debug(`Processing import: ${importPath} -> ${fullPath}`); - } + // Process imports in reverse order to handle indices correctly + for (let i = imports.length - 1; i >= 0; i--) { + const { start, _end, path: importPath } = imports[i]; - // Check for circular imports - if we're already processing this file - if (importState.currentFile === fullPath) { - if (debugMode) { - logger.warn(`Circular import detected: ${importPath}`); - } - // Replace the import with a warning comment - processedContent = processedContent.replace( - match[0], - `<!-- Circular import detected: ${importPath} -->`, - ); - continue; - } - - // Check if we've already processed this file in this import chain - if (importState.processedFiles.has(fullPath)) { - if (debugMode) { - logger.warn(`File already processed in this chain: ${importPath}`); - } - // Replace the import with a warning comment - processedContent = processedContent.replace( - match[0], - `<!-- File already processed: ${importPath} -->`, - ); - continue; - } - - // Check for potential circular imports by looking at the import chain - if (importState.currentFile) { - const currentFileDir = path.dirname(importState.currentFile); - const potentialCircularPath = path.resolve(currentFileDir, importPath); - if (potentialCircularPath === importState.currentFile) { - if (debugMode) { - logger.warn(`Circular import detected: ${importPath}`); + // Skip if inside a code region + if ( + codeRegions.some( + ([regionStart, regionEnd]) => + start >= regionStart && start < regionEnd, + ) + ) { + continue; + } + + // Validate import path + if ( + !validateImportPath(importPath, 
fileBasePath, [projectRoot || '']) + ) { + continue; + } + + const fullPath = path.resolve(fileBasePath, importPath); + const normalizedFullPath = path.normalize(fullPath); + + // Skip if already processed + if (processedFiles.has(normalizedFullPath)) continue; + + try { + await fs.access(fullPath); + const importedContent = await fs.readFile(fullPath, 'utf-8'); + + // Process the imported file + await processFlat( + importedContent, + path.dirname(fullPath), + normalizedFullPath, + depth + 1, + ); + } catch (error) { + if (debugMode) { + logger.warn( + `Failed to import ${fullPath}: ${hasMessage(error) ? error.message : 'Unknown error'}`, + ); + } + // Continue with other imports even if one fails } - // Replace the import with a warning comment - processedContent = processedContent.replace( - match[0], - ``, - ); - continue; } } + // Start with the root file (current file) + const rootPath = path.normalize( + importState.currentFile || path.resolve(basePath), + ); + await processFlat(content, basePath, rootPath, 0); + + // Concatenate all unique files in order, Claude-style + const flatContent = flatFiles + .map( + (f) => + `--- File: ${f.path} ---\n${f.content.trim()}\n--- End of File: ${f.path} ---`, + ) + .join('\n\n'); + + return { + content: flatContent, + importTree: { path: rootPath }, // Tree not meaningful in flat mode + }; + } + + // --- TREE FORMAT LOGIC (existing) --- + const codeRegions = findCodeRegions(content); + let result = ''; + let lastIndex = 0; + const imports: MemoryFile[] = []; + const importsList = findImports(content); + + for (const { start, _end, path: importPath } of importsList) { + // Add content before this import + result += content.substring(lastIndex, start); + lastIndex = _end; + + // Skip if inside a code region + if (codeRegions.some(([s, e]) => start >= s && start < e)) { + result += `@${importPath}`; + continue; + } + // Validate import path to prevent path traversal attacks + if (!validateImportPath(importPath, basePath, 
[projectRoot || ''])) { + result += ``; + continue; + } + const fullPath = path.resolve(basePath, importPath); + if (importState.processedFiles.has(fullPath)) { + result += ``; + continue; + } try { - // Check if the file exists await fs.access(fullPath); - - // Read the imported file content - const importedContent = await fs.readFile(fullPath, 'utf-8'); - - if (debugMode) { - logger.debug(`Successfully read imported file: ${fullPath}`); - } - - // Recursively process imports in the imported content - const processedImportedContent = await processImports( - importedContent, + const fileContent = await fs.readFile(fullPath, 'utf-8'); + // Mark this file as processed for this import chain + const newImportState: ImportState = { + ...importState, + processedFiles: new Set(importState.processedFiles), + currentDepth: importState.currentDepth + 1, + currentFile: fullPath, + }; + newImportState.processedFiles.add(fullPath); + const imported = await processImports( + fileContent, path.dirname(fullPath), debugMode, - { - ...importState, - processedFiles: new Set([...importState.processedFiles, fullPath]), - currentDepth: importState.currentDepth + 1, - currentFile: fullPath, // Set the current file being processed - }, + newImportState, + projectRoot, + importFormat, ); - - // Replace the import statement with the processed content - processedContent = processedContent.replace( - match[0], - `\n${processedImportedContent}\n`, - ); - } catch (error) { - const errorMessage = - error instanceof Error ? 
error.message : String(error); - if (debugMode) { - logger.error(`Failed to import ${importPath}: ${errorMessage}`); + result += `\n${imported.content}\n`; + imports.push(imported.importTree); + } catch (err: unknown) { + let message = 'Unknown error'; + if (hasMessage(err)) { + message = err.message; + } else if (typeof err === 'string') { + message = err; } - - // Replace the import with an error comment - processedContent = processedContent.replace( - match[0], - ``, - ); + logger.error(`Failed to import ${importPath}: ${message}`); + result += ``; } } + // Add any remaining content after the last match + result += content.substring(lastIndex); - return processedContent; + return { + content: result, + importTree: { + path: importState.currentFile || 'unknown', + imports: imports.length > 0 ? imports : undefined, + }, + }; } -/** - * Validates import paths to ensure they are safe and within allowed directories - * - * @param importPath - The import path to validate - * @param basePath - The base directory for resolving relative paths - * @param allowedDirectories - Array of allowed directory paths - * @returns Whether the import path is valid - */ export function validateImportPath( importPath: string, basePath: string, @@ -209,6 +413,8 @@ export function validateImportPath( return allowedDirectories.some((allowedDir) => { const normalizedAllowedDir = path.resolve(allowedDir); - return resolvedPath.startsWith(normalizedAllowedDir); + const isSamePath = resolvedPath === normalizedAllowedDir; + const isSubPath = resolvedPath.startsWith(normalizedAllowedDir + path.sep); + return isSamePath || isSubPath; }); } diff --git a/packages/core/src/utils/nextSpeakerChecker.test.ts b/packages/core/src/utils/nextSpeakerChecker.test.ts index 9141105f..70d6023f 100644 --- a/packages/core/src/utils/nextSpeakerChecker.test.ts +++ b/packages/core/src/utils/nextSpeakerChecker.test.ts @@ -6,7 +6,7 @@ import { describe, it, expect, vi, beforeEach, Mock, afterEach } from 'vitest'; 
import { Content, GoogleGenAI, Models } from '@google/genai'; -import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js'; +import { DEFAULT_GEMINI_FLASH_LITE_MODEL } from '../config/models.js'; import { GeminiClient } from '../core/client.js'; import { Config } from '../config/config.js'; import { checkNextSpeaker, NextSpeakerResponse } from './nextSpeakerChecker.js'; @@ -248,6 +248,6 @@ describe('checkNextSpeaker', () => { expect(mockGeminiClient.generateJson).toHaveBeenCalled(); const generateJsonCall = (mockGeminiClient.generateJson as Mock).mock .calls[0]; - expect(generateJsonCall[3]).toBe(DEFAULT_GEMINI_FLASH_MODEL); + expect(generateJsonCall[3]).toBe(DEFAULT_GEMINI_FLASH_LITE_MODEL); }); }); diff --git a/packages/core/src/utils/nextSpeakerChecker.ts b/packages/core/src/utils/nextSpeakerChecker.ts index 9d428887..a0d735b0 100644 --- a/packages/core/src/utils/nextSpeakerChecker.ts +++ b/packages/core/src/utils/nextSpeakerChecker.ts @@ -5,7 +5,7 @@ */ import { Content, SchemaUnion, Type } from '@google/genai'; -import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js'; +import { DEFAULT_GEMINI_FLASH_LITE_MODEL } from '../config/models.js'; import { GeminiClient } from '../core/client.js'; import { GeminiChat } from '../core/geminiChat.js'; import { isFunctionResponse } from './messageInspectors.js'; @@ -14,27 +14,7 @@ const CHECK_PROMPT = `Analyze *only* the content and structure of your immediate **Decision Rules (apply in order):** 1. **Model Continues:** If your last response explicitly states an immediate next action *you* intend to take (e.g., "Next, I will...", "Now I'll process...", "Moving on to analyze...", indicates an intended tool call that didn't execute), OR if the response seems clearly incomplete (cut off mid-thought without a natural conclusion), then the **'model'** should speak next. 2. 
**Question to User:** If your last response ends with a direct question specifically addressed *to the user*, then the **'user'** should speak next. -3. **Waiting for User:** If your last response completed a thought, statement, or task *and* does not meet the criteria for Rule 1 (Model Continues) or Rule 2 (Question to User), it implies a pause expecting user input or reaction. In this case, the **'user'** should speak next. -**Output Format:** -Respond *only* in JSON format according to the following schema. Do not include any text outside the JSON structure. -\`\`\`json -{ - "type": "object", - "properties": { - "reasoning": { - "type": "string", - "description": "Brief explanation justifying the 'next_speaker' choice based *strictly* on the applicable rule and the content/structure of the preceding turn." - }, - "next_speaker": { - "type": "string", - "enum": ["user", "model"], - "description": "Who should speak next based *only* on the preceding turn and the decision rules." - } - }, - "required": ["next_speaker", "reasoning"] -} -\`\`\` -`; +3. **Waiting for User:** If your last response completed a thought, statement, or task *and* does not meet the criteria for Rule 1 (Model Continues) or Rule 2 (Question to User), it implies a pause expecting user input or reaction. 
In this case, the **'user'** should speak next.`; const RESPONSE_SCHEMA: SchemaUnion = { type: Type.OBJECT, @@ -132,7 +112,7 @@ export async function checkNextSpeaker( contents, RESPONSE_SCHEMA, abortSignal, - DEFAULT_GEMINI_FLASH_MODEL, + DEFAULT_GEMINI_FLASH_LITE_MODEL, )) as unknown as NextSpeakerResponse; if ( diff --git a/packages/core/src/utils/paths.test.ts b/packages/core/src/utils/paths.test.ts new file mode 100644 index 00000000..d688c072 --- /dev/null +++ b/packages/core/src/utils/paths.test.ts @@ -0,0 +1,214 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect } from 'vitest'; +import { escapePath, unescapePath } from './paths.js'; + +describe('escapePath', () => { + it('should escape spaces', () => { + expect(escapePath('my file.txt')).toBe('my\\ file.txt'); + }); + + it('should escape tabs', () => { + expect(escapePath('file\twith\ttabs.txt')).toBe('file\\\twith\\\ttabs.txt'); + }); + + it('should escape parentheses', () => { + expect(escapePath('file(1).txt')).toBe('file\\(1\\).txt'); + }); + + it('should escape square brackets', () => { + expect(escapePath('file[backup].txt')).toBe('file\\[backup\\].txt'); + }); + + it('should escape curly braces', () => { + expect(escapePath('file{temp}.txt')).toBe('file\\{temp\\}.txt'); + }); + + it('should escape semicolons', () => { + expect(escapePath('file;name.txt')).toBe('file\\;name.txt'); + }); + + it('should escape ampersands', () => { + expect(escapePath('file&name.txt')).toBe('file\\&name.txt'); + }); + + it('should escape pipes', () => { + expect(escapePath('file|name.txt')).toBe('file\\|name.txt'); + }); + + it('should escape asterisks', () => { + expect(escapePath('file*.txt')).toBe('file\\*.txt'); + }); + + it('should escape question marks', () => { + expect(escapePath('file?.txt')).toBe('file\\?.txt'); + }); + + it('should escape dollar signs', () => { + expect(escapePath('file$name.txt')).toBe('file\\$name.txt'); + }); + + 
it('should escape backticks', () => { + expect(escapePath('file`name.txt')).toBe('file\\`name.txt'); + }); + + it('should escape single quotes', () => { + expect(escapePath("file'name.txt")).toBe("file\\'name.txt"); + }); + + it('should escape double quotes', () => { + expect(escapePath('file"name.txt')).toBe('file\\"name.txt'); + }); + + it('should escape hash symbols', () => { + expect(escapePath('file#name.txt')).toBe('file\\#name.txt'); + }); + + it('should escape exclamation marks', () => { + expect(escapePath('file!name.txt')).toBe('file\\!name.txt'); + }); + + it('should escape tildes', () => { + expect(escapePath('file~name.txt')).toBe('file\\~name.txt'); + }); + + it('should escape less than and greater than signs', () => { + expect(escapePath('file<name>.txt')).toBe('file\\<name\\>.txt'); + }); + + it('should handle multiple special characters', () => { + expect(escapePath('my file (backup) [v1.2].txt')).toBe( + 'my\\ file\\ \\(backup\\)\\ \\[v1.2\\].txt', + ); + }); + + it('should not double-escape already escaped characters', () => { + expect(escapePath('my\\ file.txt')).toBe('my\\ file.txt'); + expect(escapePath('file\\(name\\).txt')).toBe('file\\(name\\).txt'); + }); + + it('should handle escaped backslashes correctly', () => { + // Double backslash (escaped backslash) followed by space should escape the space + expect(escapePath('path\\\\ file.txt')).toBe('path\\\\\\ file.txt'); + // Triple backslash (escaped backslash + escaping backslash) followed by space should not double-escape + expect(escapePath('path\\\\\\ file.txt')).toBe('path\\\\\\ file.txt'); + // Quadruple backslash (two escaped backslashes) followed by space should escape the space + expect(escapePath('path\\\\\\\\ file.txt')).toBe('path\\\\\\\\\\ file.txt'); + }); + + it('should handle complex escaped backslash scenarios', () => { + // Escaped backslash before special character that needs escaping + expect(escapePath('file\\\\(test).txt')).toBe('file\\\\\\(test\\).txt'); + // Multiple escaped 
backslashes + expect(escapePath('path\\\\\\\\with space.txt')).toBe( + 'path\\\\\\\\with\\ space.txt', + ); + }); + + it('should handle paths without special characters', () => { + expect(escapePath('normalfile.txt')).toBe('normalfile.txt'); + expect(escapePath('path/to/normalfile.txt')).toBe('path/to/normalfile.txt'); + }); + + it('should handle complex real-world examples', () => { + expect(escapePath('My Documents/Project (2024)/file [backup].txt')).toBe( + 'My\\ Documents/Project\\ \\(2024\\)/file\\ \\[backup\\].txt', + ); + expect(escapePath('file with $special &chars!.txt')).toBe( + 'file\\ with\\ \\$special\\ \\&chars\\!.txt', + ); + }); + + it('should handle empty strings', () => { + expect(escapePath('')).toBe(''); + }); + + it('should handle paths with only special characters', () => { + expect(escapePath(' ()[]{};&|*?$`\'"#!~<>')).toBe( + '\\ \\(\\)\\[\\]\\{\\}\\;\\&\\|\\*\\?\\$\\`\\\'\\"\\#\\!\\~\\<\\>', + ); + }); +}); + +describe('unescapePath', () => { + it('should unescape spaces', () => { + expect(unescapePath('my\\ file.txt')).toBe('my file.txt'); + }); + + it('should unescape tabs', () => { + expect(unescapePath('file\\\twith\\\ttabs.txt')).toBe( + 'file\twith\ttabs.txt', + ); + }); + + it('should unescape parentheses', () => { + expect(unescapePath('file\\(1\\).txt')).toBe('file(1).txt'); + }); + + it('should unescape square brackets', () => { + expect(unescapePath('file\\[backup\\].txt')).toBe('file[backup].txt'); + }); + + it('should unescape curly braces', () => { + expect(unescapePath('file\\{temp\\}.txt')).toBe('file{temp}.txt'); + }); + + it('should unescape multiple special characters', () => { + expect(unescapePath('my\\ file\\ \\(backup\\)\\ \\[v1.2\\].txt')).toBe( + 'my file (backup) [v1.2].txt', + ); + }); + + it('should handle paths without escaped characters', () => { + expect(unescapePath('normalfile.txt')).toBe('normalfile.txt'); + expect(unescapePath('path/to/normalfile.txt')).toBe( + 'path/to/normalfile.txt', + ); + }); + + 
it('should handle all special characters', () => { + expect( + unescapePath( + '\\ \\(\\)\\[\\]\\{\\}\\;\\&\\|\\*\\?\\$\\`\\\'\\"\\#\\!\\~\\<\\>', + ), + ).toBe(' ()[]{};&|*?$`\'"#!~<>'); + }); + + it('should be the inverse of escapePath', () => { + const testCases = [ + 'my file.txt', + 'file(1).txt', + 'file[backup].txt', + 'My Documents/Project (2024)/file [backup].txt', + 'file with $special &chars!.txt', + ' ()[]{};&|*?$`\'"#!~<>', + 'file\twith\ttabs.txt', + ]; + + testCases.forEach((testCase) => { + expect(unescapePath(escapePath(testCase))).toBe(testCase); + }); + }); + + it('should handle empty strings', () => { + expect(unescapePath('')).toBe(''); + }); + + it('should not affect backslashes not followed by special characters', () => { + expect(unescapePath('file\\name.txt')).toBe('file\\name.txt'); + expect(unescapePath('path\\to\\file.txt')).toBe('path\\to\\file.txt'); + }); + + it('should handle escaped backslashes in unescaping', () => { + // Should correctly unescape when there are escaped backslashes + expect(unescapePath('path\\\\\\ file.txt')).toBe('path\\\\ file.txt'); + expect(unescapePath('path\\\\\\\\\\ file.txt')).toBe( + 'path\\\\\\\\ file.txt', + ); + expect(unescapePath('file\\\\\\(test\\).txt')).toBe('file\\\\(test).txt'); + }); +}); diff --git a/packages/core/src/utils/paths.ts b/packages/core/src/utils/paths.ts index fdb191fa..52c578cd 100644 --- a/packages/core/src/utils/paths.ts +++ b/packages/core/src/utils/paths.ts @@ -13,6 +13,13 @@ export const GOOGLE_ACCOUNTS_FILENAME = 'google_accounts.json'; const TMP_DIR_NAME = 'tmp'; const COMMANDS_DIR_NAME = 'commands'; +/** + * Special characters that need to be escaped in file paths for shell compatibility. + * Includes: spaces, parentheses, brackets, braces, semicolons, ampersands, pipes, + * asterisks, question marks, dollar signs, backticks, quotes, hash, and other shell metacharacters. 
+ */ +export const SHELL_SPECIAL_CHARS = /[ \t()[\]{};|*?$`'"#&<>!~]/; + /** * Replaces the home directory with a tilde. * @param path - The path to tildeify. @@ -119,26 +126,43 @@ export function makeRelative( } /** - * Escapes spaces in a file path. + * Escapes special characters in a file path like macOS terminal does. + * Escapes: spaces, parentheses, brackets, braces, semicolons, ampersands, pipes, + * asterisks, question marks, dollar signs, backticks, quotes, hash, and other shell metacharacters. */ export function escapePath(filePath: string): string { let result = ''; for (let i = 0; i < filePath.length; i++) { - // Only escape spaces that are not already escaped. - if (filePath[i] === ' ' && (i === 0 || filePath[i - 1] !== '\\')) { - result += '\\ '; + const char = filePath[i]; + + // Count consecutive backslashes before this character + let backslashCount = 0; + for (let j = i - 1; j >= 0 && filePath[j] === '\\'; j--) { + backslashCount++; + } + + // Character is already escaped if there's an odd number of backslashes before it + const isAlreadyEscaped = backslashCount % 2 === 1; + + // Only escape if not already escaped + if (!isAlreadyEscaped && SHELL_SPECIAL_CHARS.test(char)) { + result += '\\' + char; } else { - result += filePath[i]; + result += char; } } return result; } /** - * Unescapes spaces in a file path. + * Unescapes special characters in a file path. + * Removes backslash escaping from shell metacharacters. 
*/ export function unescapePath(filePath: string): string { - return filePath.replace(/\\ /g, ' '); + return filePath.replace( + new RegExp(`\\\\([${SHELL_SPECIAL_CHARS.source.slice(1, -1)}])`, 'g'), + '$1', + ); } /** diff --git a/packages/core/src/utils/retry.test.ts b/packages/core/src/utils/retry.test.ts index f84d2004..196e7341 100644 --- a/packages/core/src/utils/retry.test.ts +++ b/packages/core/src/utils/retry.test.ts @@ -6,14 +6,9 @@ /* eslint-disable @typescript-eslint/no-explicit-any */ import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'; -import { retryWithBackoff } from './retry.js'; +import { retryWithBackoff, HttpError } from './retry.js'; import { setSimulate429 } from './testUtils.js'; -// Define an interface for the error with a status property -interface HttpError extends Error { - status?: number; -} - // Helper to create a mock function that fails a certain number of times const createFailingFunction = ( failures: number, diff --git a/packages/core/src/utils/retry.ts b/packages/core/src/utils/retry.ts index b29bf7df..81300882 100644 --- a/packages/core/src/utils/retry.ts +++ b/packages/core/src/utils/retry.ts @@ -10,6 +10,10 @@ import { isGenericQuotaExceededError, } from './quotaErrorDetection.js'; +export interface HttpError extends Error { + status?: number; +} + export interface RetryOptions { maxAttempts: number; initialDelayMs: number; diff --git a/packages/core/src/utils/secure-browser-launcher.test.ts b/packages/core/src/utils/secure-browser-launcher.test.ts new file mode 100644 index 00000000..de27ce6f --- /dev/null +++ b/packages/core/src/utils/secure-browser-launcher.test.ts @@ -0,0 +1,242 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'; +import { openBrowserSecurely } from './secure-browser-launcher.js'; + +// Create mock function using vi.hoisted +const mockExecFile = vi.hoisted(() => 
vi.fn()); + +// Mock modules +vi.mock('node:child_process'); +vi.mock('node:util', () => ({ + promisify: () => mockExecFile, +})); + +describe('secure-browser-launcher', () => { + let originalPlatform: PropertyDescriptor | undefined; + + beforeEach(() => { + vi.clearAllMocks(); + mockExecFile.mockResolvedValue({ stdout: '', stderr: '' }); + originalPlatform = Object.getOwnPropertyDescriptor(process, 'platform'); + }); + + afterEach(() => { + if (originalPlatform) { + Object.defineProperty(process, 'platform', originalPlatform); + } + }); + + function setPlatform(platform: string) { + Object.defineProperty(process, 'platform', { + value: platform, + configurable: true, + }); + } + + describe('URL validation', () => { + it('should allow valid HTTP URLs', async () => { + setPlatform('darwin'); + await openBrowserSecurely('http://example.com'); + expect(mockExecFile).toHaveBeenCalledWith( + 'open', + ['http://example.com'], + expect.any(Object), + ); + }); + + it('should allow valid HTTPS URLs', async () => { + setPlatform('darwin'); + await openBrowserSecurely('https://example.com'); + expect(mockExecFile).toHaveBeenCalledWith( + 'open', + ['https://example.com'], + expect.any(Object), + ); + }); + + it('should reject non-HTTP(S) protocols', async () => { + await expect(openBrowserSecurely('file:///etc/passwd')).rejects.toThrow( + 'Unsafe protocol', + ); + await expect(openBrowserSecurely('javascript:alert(1)')).rejects.toThrow( + 'Unsafe protocol', + ); + await expect(openBrowserSecurely('ftp://example.com')).rejects.toThrow( + 'Unsafe protocol', + ); + }); + + it('should reject invalid URLs', async () => { + await expect(openBrowserSecurely('not-a-url')).rejects.toThrow( + 'Invalid URL', + ); + await expect(openBrowserSecurely('')).rejects.toThrow('Invalid URL'); + }); + + it('should reject URLs with control characters', async () => { + await expect( + openBrowserSecurely('http://example.com\nmalicious-command'), + ).rejects.toThrow('invalid characters'); + await 
expect( + openBrowserSecurely('http://example.com\rmalicious-command'), + ).rejects.toThrow('invalid characters'); + await expect( + openBrowserSecurely('http://example.com\x00'), + ).rejects.toThrow('invalid characters'); + }); + }); + + describe('Command injection prevention', () => { + it('should prevent PowerShell command injection on Windows', async () => { + setPlatform('win32'); + + // The POC from the vulnerability report + const maliciousUrl = + "http://127.0.0.1:8080/?param=example#$(Invoke-Expression([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String('Y2FsYy5leGU='))))"; + + await openBrowserSecurely(maliciousUrl); + + // Verify that execFile was called (not exec) and the URL is passed safely + expect(mockExecFile).toHaveBeenCalledWith( + 'powershell.exe', + [ + '-NoProfile', + '-NonInteractive', + '-WindowStyle', + 'Hidden', + '-Command', + `Start-Process '${maliciousUrl.replace(/'/g, "''")}'`, + ], + expect.any(Object), + ); + }); + + it('should handle URLs with special shell characters safely', async () => { + setPlatform('darwin'); + + const urlsWithSpecialChars = [ + 'http://example.com/path?param=value&other=$value', + 'http://example.com/path#fragment;command', + 'http://example.com/$(whoami)', + 'http://example.com/`command`', + 'http://example.com/|pipe', + 'http://example.com/>redirect', + ]; + + for (const url of urlsWithSpecialChars) { + await openBrowserSecurely(url); + // Verify the URL is passed as an argument, not interpreted by shell + expect(mockExecFile).toHaveBeenCalledWith( + 'open', + [url], + expect.any(Object), + ); + } + }); + + it('should properly escape single quotes in URLs on Windows', async () => { + setPlatform('win32'); + + const urlWithSingleQuotes = + "http://example.com/path?name=O'Brien&test='value'"; + await openBrowserSecurely(urlWithSingleQuotes); + + // Verify that single quotes are escaped by doubling them + expect(mockExecFile).toHaveBeenCalledWith( + 'powershell.exe', + [ + '-NoProfile', + 
'-NonInteractive', + '-WindowStyle', + 'Hidden', + '-Command', + `Start-Process 'http://example.com/path?name=O''Brien&test=''value'''`, + ], + expect.any(Object), + ); + }); + }); + + describe('Platform-specific behavior', () => { + it('should use correct command on macOS', async () => { + setPlatform('darwin'); + await openBrowserSecurely('https://example.com'); + expect(mockExecFile).toHaveBeenCalledWith( + 'open', + ['https://example.com'], + expect.any(Object), + ); + }); + + it('should use PowerShell on Windows', async () => { + setPlatform('win32'); + await openBrowserSecurely('https://example.com'); + expect(mockExecFile).toHaveBeenCalledWith( + 'powershell.exe', + expect.arrayContaining([ + '-Command', + `Start-Process 'https://example.com'`, + ]), + expect.any(Object), + ); + }); + + it('should use xdg-open on Linux', async () => { + setPlatform('linux'); + await openBrowserSecurely('https://example.com'); + expect(mockExecFile).toHaveBeenCalledWith( + 'xdg-open', + ['https://example.com'], + expect.any(Object), + ); + }); + + it('should throw on unsupported platforms', async () => { + setPlatform('aix'); + await expect(openBrowserSecurely('https://example.com')).rejects.toThrow( + 'Unsupported platform', + ); + }); + }); + + describe('Error handling', () => { + it('should handle browser launch failures gracefully', async () => { + setPlatform('darwin'); + mockExecFile.mockRejectedValueOnce(new Error('Command not found')); + + await expect(openBrowserSecurely('https://example.com')).rejects.toThrow( + 'Failed to open browser', + ); + }); + + it('should try fallback browsers on Linux', async () => { + setPlatform('linux'); + + // First call to xdg-open fails + mockExecFile.mockRejectedValueOnce(new Error('Command not found')); + // Second call to gnome-open succeeds + mockExecFile.mockResolvedValueOnce({ stdout: '', stderr: '' }); + + await openBrowserSecurely('https://example.com'); + + expect(mockExecFile).toHaveBeenCalledTimes(2); + 
expect(mockExecFile).toHaveBeenNthCalledWith( + 1, + 'xdg-open', + ['https://example.com'], + expect.any(Object), + ); + expect(mockExecFile).toHaveBeenNthCalledWith( + 2, + 'gnome-open', + ['https://example.com'], + expect.any(Object), + ); + }); + }); +}); diff --git a/packages/core/src/utils/secure-browser-launcher.ts b/packages/core/src/utils/secure-browser-launcher.ts new file mode 100644 index 00000000..ec8357be --- /dev/null +++ b/packages/core/src/utils/secure-browser-launcher.ts @@ -0,0 +1,188 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { execFile } from 'node:child_process'; +import { promisify } from 'node:util'; +import { platform } from 'node:os'; +import { URL } from 'node:url'; + +const execFileAsync = promisify(execFile); + +/** + * Validates that a URL is safe to open in a browser. + * Only allows HTTP and HTTPS URLs to prevent command injection. + * + * @param url The URL to validate + * @throws Error if the URL is invalid or uses an unsafe protocol + */ +function validateUrl(url: string): void { + let parsedUrl: URL; + + try { + parsedUrl = new URL(url); + } catch (_error) { + throw new Error(`Invalid URL: ${url}`); + } + + // Only allow HTTP and HTTPS protocols + if (parsedUrl.protocol !== 'http:' && parsedUrl.protocol !== 'https:') { + throw new Error( + `Unsafe protocol: ${parsedUrl.protocol}. Only HTTP and HTTPS are allowed.`, + ); + } + + // Additional validation: ensure no newlines or control characters + // eslint-disable-next-line no-control-regex + if (/[\r\n\x00-\x1f]/.test(url)) { + throw new Error('URL contains invalid characters'); + } +} + +/** + * Opens a URL in the default browser using platform-specific commands. + * This implementation avoids shell injection vulnerabilities by: + * 1. Validating the URL to ensure it's HTTP/HTTPS only + * 2. Using execFile instead of exec to avoid shell interpretation + * 3. 
Passing the URL as an argument rather than constructing a command string + * + * @param url The URL to open + * @throws Error if the URL is invalid or if opening the browser fails + */ +export async function openBrowserSecurely(url: string): Promise<void> { + // Validate the URL first + validateUrl(url); + + const platformName = platform(); + let command: string; + let args: string[]; + + switch (platformName) { + case 'darwin': + // macOS + command = 'open'; + args = [url]; + break; + + case 'win32': + // Windows - use PowerShell with Start-Process + // This avoids the cmd.exe shell which is vulnerable to injection + command = 'powershell.exe'; + args = [ + '-NoProfile', + '-NonInteractive', + '-WindowStyle', + 'Hidden', + '-Command', + `Start-Process '${url.replace(/'/g, "''")}'`, + ]; + break; + + case 'linux': + case 'freebsd': + case 'openbsd': + // Linux and BSD variants + // Try xdg-open first, fall back to other options + command = 'xdg-open'; + args = [url]; + break; + + default: + throw new Error(`Unsupported platform: ${platformName}`); + } + + const options: Record<string, unknown> = { + // Don't inherit parent's environment to avoid potential issues + env: { + ...process.env, + // Ensure we're not in a shell that might interpret special characters + SHELL: undefined, + }, + // Detach the browser process so it doesn't block + detached: true, + stdio: 'ignore', + }; + + try { + await execFileAsync(command, args, options); + } catch (error) { + // For Linux, try fallback commands if xdg-open fails + if ( + (platformName === 'linux' || + platformName === 'freebsd' || + platformName === 'openbsd') && + command === 'xdg-open' + ) { + const fallbackCommands = [ + 'gnome-open', + 'kde-open', + 'firefox', + 'chromium', + 'google-chrome', + ]; + + for (const fallbackCommand of fallbackCommands) { + try { + await execFileAsync(fallbackCommand, [url], options); + return; // Success! 
+ } catch { + // Try next command + continue; + } + } + } + + // Re-throw the error if all attempts failed + throw new Error( + `Failed to open browser: ${error instanceof Error ? error.message : 'Unknown error'}`, + ); + } +} + +/** + * Checks if the current environment should attempt to launch a browser. + * This is the same logic as in browser.ts for consistency. + * + * @returns True if the tool should attempt to launch a browser + */ +export function shouldLaunchBrowser(): boolean { + // A list of browser names that indicate we should not attempt to open a + // web browser for the user. + const browserBlocklist = ['www-browser']; + const browserEnv = process.env.BROWSER; + if (browserEnv && browserBlocklist.includes(browserEnv)) { + return false; + } + + // Common environment variables used in CI/CD or other non-interactive shells. + if (process.env.CI || process.env.DEBIAN_FRONTEND === 'noninteractive') { + return false; + } + + // The presence of SSH_CONNECTION indicates a remote session. + // We should not attempt to launch a browser unless a display is explicitly available + // (checked below for Linux). + const isSSH = !!process.env.SSH_CONNECTION; + + // On Linux, the presence of a display server is a strong indicator of a GUI. + if (platform() === 'linux') { + // These are environment variables that can indicate a running compositor on Linux. + const displayVariables = ['DISPLAY', 'WAYLAND_DISPLAY', 'MIR_SOCKET']; + const hasDisplay = displayVariables.some((v) => !!process.env[v]); + if (!hasDisplay) { + return false; + } + } + + // If in an SSH session on a non-Linux OS (e.g., macOS), don't launch browser. + // The Linux case is handled above (it's allowed if DISPLAY is set). + if (isSSH && platform() !== 'linux') { + return false; + } + + // For non-Linux OSes, we generally assume a GUI is available + // unless other signals (like SSH) suggest otherwise. 
+ return true; +} diff --git a/packages/core/src/utils/summarizer.ts b/packages/core/src/utils/summarizer.ts index a038b8e3..b6e4f543 100644 --- a/packages/core/src/utils/summarizer.ts +++ b/packages/core/src/utils/summarizer.ts @@ -11,7 +11,7 @@ import { GenerateContentResponse, } from '@google/genai'; import { GeminiClient } from '../core/client.js'; -import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js'; +import { DEFAULT_GEMINI_FLASH_LITE_MODEL } from '../config/models.js'; import { getResponseText, partToString } from './partUtils.js'; /** @@ -86,7 +86,7 @@ export async function summarizeToolOutput( contents, toolOutputSummarizerConfig, abortSignal, - DEFAULT_GEMINI_FLASH_MODEL, + DEFAULT_GEMINI_FLASH_LITE_MODEL, )) as unknown as GenerateContentResponse; return getResponseText(parsedResponse) || textToSummarize; } catch (error) { diff --git a/packages/core/src/utils/workspaceContext.test.ts b/packages/core/src/utils/workspaceContext.test.ts new file mode 100644 index 00000000..67d06b62 --- /dev/null +++ b/packages/core/src/utils/workspaceContext.test.ts @@ -0,0 +1,283 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { describe, it, expect, beforeEach, vi } from 'vitest'; +import * as fs from 'fs'; +import * as path from 'path'; +import { WorkspaceContext } from './workspaceContext.js'; + +vi.mock('fs'); + +describe('WorkspaceContext', () => { + let workspaceContext: WorkspaceContext; + // Use path module to create platform-agnostic paths + const mockCwd = path.resolve(path.sep, 'home', 'user', 'project'); + const mockExistingDir = path.resolve( + path.sep, + 'home', + 'user', + 'other-project', + ); + const mockNonExistentDir = path.resolve( + path.sep, + 'home', + 'user', + 'does-not-exist', + ); + const mockSymlinkDir = path.resolve(path.sep, 'home', 'user', 'symlink'); + const mockRealPath = path.resolve(path.sep, 'home', 'user', 'real-directory'); + + beforeEach(() => { + 
vi.resetAllMocks(); + + // Mock fs.existsSync + vi.mocked(fs.existsSync).mockImplementation((path) => { + const pathStr = path.toString(); + return ( + pathStr === mockCwd || + pathStr === mockExistingDir || + pathStr === mockSymlinkDir || + pathStr === mockRealPath + ); + }); + + // Mock fs.statSync + vi.mocked(fs.statSync).mockImplementation((path) => { + const pathStr = path.toString(); + if (pathStr === mockNonExistentDir) { + throw new Error('ENOENT'); + } + return { + isDirectory: () => true, + } as fs.Stats; + }); + + // Mock fs.realpathSync + vi.mocked(fs.realpathSync).mockImplementation((path) => { + const pathStr = path.toString(); + if (pathStr === mockSymlinkDir) { + return mockRealPath; + } + return pathStr; + }); + }); + + describe('initialization', () => { + it('should initialize with a single directory (cwd)', () => { + workspaceContext = new WorkspaceContext(mockCwd); + const directories = workspaceContext.getDirectories(); + expect(directories).toHaveLength(1); + expect(directories[0]).toBe(mockCwd); + }); + + it('should validate and resolve directories to absolute paths', () => { + const absolutePath = path.join(mockCwd, 'subdir'); + vi.mocked(fs.existsSync).mockImplementation( + (p) => p === mockCwd || p === absolutePath, + ); + vi.mocked(fs.realpathSync).mockImplementation((p) => p.toString()); + + workspaceContext = new WorkspaceContext(mockCwd, [absolutePath]); + const directories = workspaceContext.getDirectories(); + expect(directories).toContain(absolutePath); + }); + + it('should reject non-existent directories', () => { + expect(() => { + new WorkspaceContext(mockCwd, [mockNonExistentDir]); + }).toThrow('Directory does not exist'); + }); + + it('should handle empty initialization', () => { + workspaceContext = new WorkspaceContext(mockCwd, []); + const directories = workspaceContext.getDirectories(); + expect(directories).toHaveLength(1); + expect(directories[0]).toBe(mockCwd); + }); + }); + + describe('adding directories', () => { + 
beforeEach(() => { + workspaceContext = new WorkspaceContext(mockCwd); + }); + + it('should add valid directories', () => { + workspaceContext.addDirectory(mockExistingDir); + const directories = workspaceContext.getDirectories(); + expect(directories).toHaveLength(2); + expect(directories).toContain(mockExistingDir); + }); + + it('should resolve relative paths to absolute', () => { + // Since we can't mock path.resolve, we'll test with absolute paths + workspaceContext.addDirectory(mockExistingDir); + const directories = workspaceContext.getDirectories(); + expect(directories).toContain(mockExistingDir); + }); + + it('should reject non-existent directories', () => { + expect(() => { + workspaceContext.addDirectory(mockNonExistentDir); + }).toThrow('Directory does not exist'); + }); + + it('should prevent duplicate directories', () => { + workspaceContext.addDirectory(mockExistingDir); + workspaceContext.addDirectory(mockExistingDir); + const directories = workspaceContext.getDirectories(); + expect(directories.filter((d) => d === mockExistingDir)).toHaveLength(1); + }); + + it('should handle symbolic links correctly', () => { + workspaceContext.addDirectory(mockSymlinkDir); + const directories = workspaceContext.getDirectories(); + expect(directories).toContain(mockRealPath); + expect(directories).not.toContain(mockSymlinkDir); + }); + }); + + describe('path validation', () => { + beforeEach(() => { + workspaceContext = new WorkspaceContext(mockCwd, [mockExistingDir]); + }); + + it('should accept paths within workspace directories', () => { + const validPath1 = path.join(mockCwd, 'src', 'file.ts'); + const validPath2 = path.join(mockExistingDir, 'lib', 'module.js'); + + expect(workspaceContext.isPathWithinWorkspace(validPath1)).toBe(true); + expect(workspaceContext.isPathWithinWorkspace(validPath2)).toBe(true); + }); + + it('should reject paths outside workspace', () => { + const invalidPath = path.resolve( + path.dirname(mockCwd), + 'outside-workspace', + 
'file.txt', + ); + expect(workspaceContext.isPathWithinWorkspace(invalidPath)).toBe(false); + }); + + it('should resolve symbolic links before validation', () => { + const symlinkPath = path.join(mockCwd, 'symlink-file'); + const realPath = path.join(mockCwd, 'real-file'); + + vi.mocked(fs.existsSync).mockReturnValue(true); + vi.mocked(fs.realpathSync).mockImplementation((p) => { + if (p === symlinkPath) { + return realPath; + } + return p.toString(); + }); + + expect(workspaceContext.isPathWithinWorkspace(symlinkPath)).toBe(true); + }); + + it('should handle nested directories correctly', () => { + const nestedPath = path.join( + mockCwd, + 'deeply', + 'nested', + 'path', + 'file.txt', + ); + expect(workspaceContext.isPathWithinWorkspace(nestedPath)).toBe(true); + }); + + it('should handle edge cases (root, parent references)', () => { + const rootPath = '/'; + const parentPath = path.dirname(mockCwd); + + expect(workspaceContext.isPathWithinWorkspace(rootPath)).toBe(false); + expect(workspaceContext.isPathWithinWorkspace(parentPath)).toBe(false); + }); + + it('should handle non-existent paths correctly', () => { + const nonExistentPath = path.join(mockCwd, 'does-not-exist.txt'); + vi.mocked(fs.existsSync).mockImplementation((p) => p !== nonExistentPath); + + // Should still validate based on path structure + expect(workspaceContext.isPathWithinWorkspace(nonExistentPath)).toBe( + true, + ); + }); + }); + + describe('getDirectories', () => { + it('should return a copy of directories array', () => { + workspaceContext = new WorkspaceContext(mockCwd); + const dirs1 = workspaceContext.getDirectories(); + const dirs2 = workspaceContext.getDirectories(); + + expect(dirs1).not.toBe(dirs2); // Different array instances + expect(dirs1).toEqual(dirs2); // Same content + }); + }); + + describe('symbolic link security', () => { + beforeEach(() => { + workspaceContext = new WorkspaceContext(mockCwd); + }); + + it('should follow symlinks but validate resolved path', () => { + 
const symlinkInsideWorkspace = path.join(mockCwd, 'link-to-subdir'); + const resolvedInsideWorkspace = path.join(mockCwd, 'subdir'); + + vi.mocked(fs.existsSync).mockReturnValue(true); + vi.mocked(fs.realpathSync).mockImplementation((p) => { + if (p === symlinkInsideWorkspace) { + return resolvedInsideWorkspace; + } + return p.toString(); + }); + + expect( + workspaceContext.isPathWithinWorkspace(symlinkInsideWorkspace), + ).toBe(true); + }); + + it('should prevent sandbox escape via symlinks', () => { + const symlinkEscape = path.join(mockCwd, 'escape-link'); + const resolvedOutside = path.resolve(mockCwd, '..', 'outside-file'); + + vi.mocked(fs.existsSync).mockImplementation((p) => { + const pathStr = p.toString(); + return ( + pathStr === symlinkEscape || + pathStr === resolvedOutside || + pathStr === mockCwd + ); + }); + vi.mocked(fs.realpathSync).mockImplementation((p) => { + if (p.toString() === symlinkEscape) { + return resolvedOutside; + } + return p.toString(); + }); + vi.mocked(fs.statSync).mockImplementation( + (p) => + ({ + isDirectory: () => p.toString() !== resolvedOutside, + }) as fs.Stats, + ); + + workspaceContext = new WorkspaceContext(mockCwd); + expect(workspaceContext.isPathWithinWorkspace(symlinkEscape)).toBe(false); + }); + + it('should handle circular symlinks', () => { + const circularLink = path.join(mockCwd, 'circular'); + + vi.mocked(fs.existsSync).mockReturnValue(true); + vi.mocked(fs.realpathSync).mockImplementation(() => { + throw new Error('ELOOP: too many symbolic links encountered'); + }); + + // Should handle the error gracefully + expect(workspaceContext.isPathWithinWorkspace(circularLink)).toBe(false); + }); + }); +}); diff --git a/packages/core/src/utils/workspaceContext.ts b/packages/core/src/utils/workspaceContext.ts new file mode 100644 index 00000000..16d1b4c9 --- /dev/null +++ b/packages/core/src/utils/workspaceContext.ts @@ -0,0 +1,127 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: 
Apache-2.0
+ */
+
+import * as fs from 'fs';
+import * as path from 'path';
+
+/**
+ * WorkspaceContext manages multiple workspace directories and validates paths
+ * against them. This allows the CLI to operate on files from multiple directories
+ * in a single session.
+ */
+export class WorkspaceContext {
+  private directories: Set<string>;
+
+  /**
+   * Creates a new WorkspaceContext with the given initial directory and optional additional directories.
+   * @param initialDirectory The initial working directory (usually cwd)
+   * @param additionalDirectories Optional array of additional directories to include
+   */
+  constructor(initialDirectory: string, additionalDirectories: string[] = []) {
+    this.directories = new Set<string>();
+
+    this.addDirectoryInternal(initialDirectory);
+
+    for (const dir of additionalDirectories) {
+      this.addDirectoryInternal(dir);
+    }
+  }
+
+  /**
+   * Adds a directory to the workspace.
+   * @param directory The directory path to add (can be relative or absolute)
+   * @param basePath Optional base path for resolving relative paths (defaults to cwd)
+   */
+  addDirectory(directory: string, basePath: string = process.cwd()): void {
+    this.addDirectoryInternal(directory, basePath);
+  }
+
+  /**
+   * Internal method to add a directory with validation.
+   */
+  private addDirectoryInternal(
+    directory: string,
+    basePath: string = process.cwd(),
+  ): void {
+    const absolutePath = path.isAbsolute(directory)
+      ? directory
+      : path.resolve(basePath, directory);
+
+    if (!fs.existsSync(absolutePath)) {
+      throw new Error(`Directory does not exist: ${absolutePath}`);
+    }
+
+    const stats = fs.statSync(absolutePath);
+    if (!stats.isDirectory()) {
+      throw new Error(`Path is not a directory: ${absolutePath}`);
+    }
+
+    let realPath: string;
+    try {
+      realPath = fs.realpathSync(absolutePath);
+    } catch (_error) {
+      throw new Error(`Failed to resolve path: ${absolutePath}`);
+    }
+
+    this.directories.add(realPath);
+  }
+
+  /**
+   * Gets a copy of all workspace directories.
+ * @returns Array of absolute directory paths + */ + getDirectories(): readonly string[] { + return Array.from(this.directories); + } + + /** + * Checks if a given path is within any of the workspace directories. + * @param pathToCheck The path to validate + * @returns True if the path is within the workspace, false otherwise + */ + isPathWithinWorkspace(pathToCheck: string): boolean { + try { + const absolutePath = path.resolve(pathToCheck); + + let resolvedPath = absolutePath; + if (fs.existsSync(absolutePath)) { + try { + resolvedPath = fs.realpathSync(absolutePath); + } catch (_error) { + return false; + } + } + + for (const dir of this.directories) { + if (this.isPathWithinRoot(resolvedPath, dir)) { + return true; + } + } + + return false; + } catch (_error) { + return false; + } + } + + /** + * Checks if a path is within a given root directory. + * @param pathToCheck The absolute path to check + * @param rootDirectory The absolute root directory + * @returns True if the path is within the root directory, false otherwise + */ + private isPathWithinRoot( + pathToCheck: string, + rootDirectory: string, + ): boolean { + const relative = path.relative(rootDirectory, pathToCheck); + return ( + !relative.startsWith(`..${path.sep}`) && + relative !== '..' && + !path.isAbsolute(relative) + ); + } +} diff --git a/packages/vscode-ide-companion/.vscodeignore b/packages/vscode-ide-companion/.vscodeignore index be532ef9..e74d0536 100644 --- a/packages/vscode-ide-companion/.vscodeignore +++ b/packages/vscode-ide-companion/.vscodeignore @@ -3,4 +3,5 @@ ../ ../../ !LICENSE +!NOTICES.txt !assets/ diff --git a/packages/vscode-ide-companion/NOTICES.txt b/packages/vscode-ide-companion/NOTICES.txt new file mode 100644 index 00000000..56f3d4f3 --- /dev/null +++ b/packages/vscode-ide-companion/NOTICES.txt @@ -0,0 +1,114 @@ +This file contains third-party software notices and license terms. 
+ +============================================================ +@modelcontextprotocol/sdk@^1.15.1 +(git+https://github.com/modelcontextprotocol/typescript-sdk.git) + +MIT License + +Copyright (c) 2024 Anthropic, PBC + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + + +============================================================ +cors@^2.8.5 +(No repository found) + +(The MIT License) + +Copyright (c) 2013 Troy Goode + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +'Software'), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + +============================================================ +express@^5.1.0 +(No repository found) + +(The MIT License) + +Copyright (c) 2009-2014 TJ Holowaychuk +Copyright (c) 2013-2014 Roman Shtylman +Copyright (c) 2014-2015 Douglas Christopher Wilson + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +'Software'), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+ + +============================================================ +zod@^3.25.76 +(git+https://github.com/colinhacks/zod.git) + +MIT License + +Copyright (c) 2025 Colin McDonnell + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
+ + diff --git a/packages/vscode-ide-companion/esbuild.js b/packages/vscode-ide-companion/esbuild.js index 522542db..060be7c6 100644 --- a/packages/vscode-ide-companion/esbuild.js +++ b/packages/vscode-ide-companion/esbuild.js @@ -4,7 +4,7 @@ * SPDX-License-Identifier: Apache-2.0 */ -const esbuild = require('esbuild'); +import esbuild from 'esbuild'; const production = process.argv.includes('--production'); const watch = process.argv.includes('--watch'); @@ -40,7 +40,7 @@ async function main() { sourcemap: !production, sourcesContent: false, platform: 'node', - outfile: 'dist/extension.js', + outfile: 'dist/extension.cjs', external: ['vscode'], logLevel: 'silent', plugins: [ diff --git a/packages/vscode-ide-companion/package.json b/packages/vscode-ide-companion/package.json index a2ea4a5b..471e7296 100644 --- a/packages/vscode-ide-companion/package.json +++ b/packages/vscode-ide-companion/package.json @@ -31,22 +31,79 @@ "onStartupFinished" ], "contributes": { + "languages": [ + { + "id": "qwen-diff-editable" + } + ], "commands": [ { - "command": "qwen-code.runQwenCode", + "command": "qwen.diff.accept", + "title": "Qwen Code: Accept Current Diff", + "icon": "$(check)" + }, + { + "command": "qwen.diff.cancel", + "title": "Cancel", + "icon": "$(close)" + }, + { + "command": "qwen-code.runGeminiCLI", "title": "Qwen Code: Run" + }, + { + "command": "qwen-code.showNotices", + "title": "Qwen Code: View Third-Party Notices" + } + ], + "menus": { + "commandPalette": [ + { + "command": "qwen.diff.accept", + "when": "qwen.diff.isVisible" + }, + { + "command": "qwen.diff.cancel", + "when": "qwen.diff.isVisible" + } + ], + "editor/title": [ + { + "command": "qwen.diff.accept", + "when": "qwen.diff.isVisible", + "group": "navigation" + }, + { + "command": "qwen.diff.cancel", + "when": "qwen.diff.isVisible", + "group": "navigation" + } + ] + }, + "keybindings": [ + { + "command": "qwen.diff.accept", + "key": "ctrl+s", + "when": "qwen.diff.isVisible" + }, + { + "command": 
"qwen.diff.accept", + "key": "cmd+s", + "when": "qwen.diff.isVisible" } ] }, - "main": "./dist/extension.js", + "main": "./dist/extension.cjs", + "type": "module", "scripts": { - "vscode:prepublish": "npm run check-types && npm run lint && node esbuild.js --production", + "vscode:prepublish": "npm run generate:notices && npm run check-types && npm run lint && node esbuild.js --production", "build": "npm run compile", "compile": "npm run check-types && npm run lint && node esbuild.js", "watch": "npm-run-all -p watch:*", "watch:esbuild": "node esbuild.js --watch", "watch:tsc": "tsc --noEmit --watch --project tsconfig.json", "package": "vsce package --no-dependencies", + "generate:notices": "node ./scripts/generate-notices.js", "check-types": "tsc --noEmit", "lint": "eslint src", "test": "vitest run", diff --git a/packages/vscode-ide-companion/scripts/generate-notices.js b/packages/vscode-ide-companion/scripts/generate-notices.js new file mode 100644 index 00000000..55dc3108 --- /dev/null +++ b/packages/vscode-ide-companion/scripts/generate-notices.js @@ -0,0 +1,105 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import fs from 'fs/promises'; +import path from 'path'; +import { fileURLToPath } from 'url'; + +const projectRoot = path.resolve( + path.join(path.dirname(fileURLToPath(import.meta.url)), '..', '..', '..'), +); +const packagePath = path.join(projectRoot, 'packages', 'vscode-ide-companion'); +const noticeFilePath = path.join(packagePath, 'NOTICES.txt'); + +async function getDependencyLicense(depName, depVersion) { + let depPackageJsonPath; + let licenseContent = 'License text not found.'; + let repositoryUrl = 'No repository found'; + + try { + depPackageJsonPath = path.join( + projectRoot, + 'node_modules', + depName, + 'package.json', + ); + if (!(await fs.stat(depPackageJsonPath).catch(() => false))) { + depPackageJsonPath = path.join( + packagePath, + 'node_modules', + depName, + 'package.json', + ); + } 
+ + const depPackageJsonContent = await fs.readFile( + depPackageJsonPath, + 'utf-8', + ); + const depPackageJson = JSON.parse(depPackageJsonContent); + + repositoryUrl = depPackageJson.repository?.url || repositoryUrl; + + const licenseFile = depPackageJson.licenseFile + ? path.join(path.dirname(depPackageJsonPath), depPackageJson.licenseFile) + : path.join(path.dirname(depPackageJsonPath), 'LICENSE'); + + try { + licenseContent = await fs.readFile(licenseFile, 'utf-8'); + } catch (e) { + console.warn( + `Warning: Failed to read license file for ${depName}: ${e.message}`, + ); + } + } catch (e) { + console.warn( + `Warning: Could not find package.json for ${depName}: ${e.message}`, + ); + } + + return { + name: depName, + version: depVersion, + repository: repositoryUrl, + license: licenseContent, + }; +} + +async function main() { + try { + const packageJsonPath = path.join(packagePath, 'package.json'); + const packageJsonContent = await fs.readFile(packageJsonPath, 'utf-8'); + const packageJson = JSON.parse(packageJsonContent); + + const dependencies = packageJson.dependencies || {}; + const dependencyEntries = Object.entries(dependencies); + + const licensePromises = dependencyEntries.map(([depName, depVersion]) => + getDependencyLicense(depName, depVersion), + ); + + const dependencyLicenses = await Promise.all(licensePromises); + + let noticeText = + 'This file contains third-party software notices and license terms.\n\n'; + + for (const dep of dependencyLicenses) { + noticeText += + '============================================================\n'; + noticeText += `${dep.name}@${dep.version}\n`; + noticeText += `(${dep.repository})\n\n`; + noticeText += `${dep.license}\n\n`; + } + + await fs.writeFile(noticeFilePath, noticeText); + console.log(`NOTICES.txt generated at ${noticeFilePath}`); + } catch (error) { + console.error('Error generating NOTICES.txt:', error); + process.exit(1); + } +} + +main().catch(console.error); diff --git 
a/packages/vscode-ide-companion/src/diff-manager.ts b/packages/vscode-ide-companion/src/diff-manager.ts
new file mode 100644
index 00000000..159a6101
--- /dev/null
+++ b/packages/vscode-ide-companion/src/diff-manager.ts
@@ -0,0 +1,228 @@
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import * as vscode from 'vscode';
+import * as path from 'node:path';
+import { DIFF_SCHEME } from './extension.js';
+import { type JSONRPCNotification } from '@modelcontextprotocol/sdk/types.js';
+
+export class DiffContentProvider implements vscode.TextDocumentContentProvider {
+  private content = new Map<string, string>();
+  private onDidChangeEmitter = new vscode.EventEmitter<vscode.Uri>();
+
+  get onDidChange(): vscode.Event<vscode.Uri> {
+    return this.onDidChangeEmitter.event;
+  }
+
+  provideTextDocumentContent(uri: vscode.Uri): string {
+    return this.content.get(uri.toString()) ?? '';
+  }
+
+  setContent(uri: vscode.Uri, content: string): void {
+    this.content.set(uri.toString(), content);
+    this.onDidChangeEmitter.fire(uri);
+  }
+
+  deleteContent(uri: vscode.Uri): void {
+    this.content.delete(uri.toString());
+  }
+
+  getContent(uri: vscode.Uri): string | undefined {
+    return this.content.get(uri.toString());
+  }
+}
+
+// Information about a diff view that is currently open.
+interface DiffInfo {
+  originalFilePath: string;
+  newContent: string;
+  rightDocUri: vscode.Uri;
+}
+
+/**
+ * Manages the state and lifecycle of diff views within the IDE.
+ */
+export class DiffManager {
+  private readonly onDidChangeEmitter =
+    new vscode.EventEmitter<JSONRPCNotification>();
+  readonly onDidChange = this.onDidChangeEmitter.event;
+  private diffDocuments = new Map<string, DiffInfo>();
+
+  constructor(
+    private readonly logger: vscode.OutputChannel,
+    private readonly diffContentProvider: DiffContentProvider,
+  ) {}
+
+  /**
+   * Creates and shows a new diff view.
+   */
+  async showDiff(filePath: string, newContent: string) {
+    const fileUri = vscode.Uri.file(filePath);
+
+    const rightDocUri = vscode.Uri.from({
+      scheme: DIFF_SCHEME,
+      path: filePath,
+      // cache busting
+      query: `rand=${Math.random()}`,
+    });
+    this.diffContentProvider.setContent(rightDocUri, newContent);
+
+    this.addDiffDocument(rightDocUri, {
+      originalFilePath: filePath,
+      newContent,
+      rightDocUri,
+    });
+
+    const diffTitle = `${path.basename(filePath)} ↔ Modified`;
+    await vscode.commands.executeCommand(
+      'setContext',
+      'qwen.diff.isVisible',
+      true,
+    );
+
+    let leftDocUri;
+    try {
+      await vscode.workspace.fs.stat(fileUri);
+      leftDocUri = fileUri;
+    } catch {
+      // We need to provide an empty document to diff against.
+      // Using the 'untitled' scheme is one way to do this.
+      leftDocUri = vscode.Uri.from({
+        scheme: 'untitled',
+        path: filePath,
+      });
+    }
+
+    await vscode.commands.executeCommand(
+      'vscode.diff',
+      leftDocUri,
+      rightDocUri,
+      diffTitle,
+      {
+        preview: false,
+      },
+    );
+    await vscode.commands.executeCommand(
+      'workbench.action.files.setActiveEditorWriteableInSession',
+    );
+  }
+
+  /**
+   * Closes an open diff view for a specific file.
+   */
+  async closeDiff(filePath: string) {
+    let uriToClose: vscode.Uri | undefined;
+    for (const [uriString, diffInfo] of this.diffDocuments.entries()) {
+      if (diffInfo.originalFilePath === filePath) {
+        uriToClose = vscode.Uri.parse(uriString);
+        break;
+      }
+    }
+
+    if (uriToClose) {
+      const rightDoc = await vscode.workspace.openTextDocument(uriToClose);
+      const modifiedContent = rightDoc.getText();
+      await this.closeDiffEditor(uriToClose);
+      this.onDidChangeEmitter.fire({
+        jsonrpc: '2.0',
+        method: 'ide/diffClosed',
+        params: {
+          filePath,
+          content: modifiedContent,
+        },
+      });
+      vscode.window.showInformationMessage(`Diff for ${filePath} closed.`);
+    } else {
+      vscode.window.showWarningMessage(`No open diff found for ${filePath}.`);
+    }
+  }
+
+  /**
+   * User accepts the changes in a diff view.
Does not apply changes.
+   */
+  async acceptDiff(rightDocUri: vscode.Uri) {
+    const diffInfo = this.diffDocuments.get(rightDocUri.toString());
+    if (!diffInfo) {
+      this.logger.appendLine(
+        `No diff info found for ${rightDocUri.toString()}`,
+      );
+      return;
+    }
+
+    const rightDoc = await vscode.workspace.openTextDocument(rightDocUri);
+    const modifiedContent = rightDoc.getText();
+    await this.closeDiffEditor(rightDocUri);
+
+    this.onDidChangeEmitter.fire({
+      jsonrpc: '2.0',
+      method: 'ide/diffAccepted',
+      params: {
+        filePath: diffInfo.originalFilePath,
+        content: modifiedContent,
+      },
+    });
+  }
+
+  /**
+   * Called when a user cancels a diff view.
+   */
+  async cancelDiff(rightDocUri: vscode.Uri) {
+    const diffInfo = this.diffDocuments.get(rightDocUri.toString());
+    if (!diffInfo) {
+      this.logger.appendLine(
+        `No diff info found for ${rightDocUri.toString()}`,
+      );
+      // Even if we don't have diff info, we should still close the editor.
+      await this.closeDiffEditor(rightDocUri);
+      return;
+    }
+
+    const rightDoc = await vscode.workspace.openTextDocument(rightDocUri);
+    const modifiedContent = rightDoc.getText();
+    await this.closeDiffEditor(rightDocUri);
+
+    this.onDidChangeEmitter.fire({
+      jsonrpc: '2.0',
+      method: 'ide/diffClosed',
+      params: {
+        filePath: diffInfo.originalFilePath,
+        content: modifiedContent,
+      },
+    });
+  }
+
+  private addDiffDocument(uri: vscode.Uri, diffInfo: DiffInfo) {
+    this.diffDocuments.set(uri.toString(), diffInfo);
+  }
+
+  private async closeDiffEditor(rightDocUri: vscode.Uri) {
+    const diffInfo = this.diffDocuments.get(rightDocUri.toString());
+    await vscode.commands.executeCommand(
+      'setContext',
+      'qwen.diff.isVisible',
+      false,
+    );
+
+    if (diffInfo) {
+      this.diffDocuments.delete(rightDocUri.toString());
+      this.diffContentProvider.deleteContent(rightDocUri);
+    }
+
+    // Find and close the tab corresponding to the diff view
+    for (const tabGroup of vscode.window.tabGroups.all) {
+      for (const tab of tabGroup.tabs) {
+        const input =
tab.input as { + modified?: vscode.Uri; + original?: vscode.Uri; + }; + if (input && input.modified?.toString() === rightDocUri.toString()) { + await vscode.window.tabGroups.close(tab); + return; + } + } + } + } +} diff --git a/packages/vscode-ide-companion/src/extension.ts b/packages/vscode-ide-companion/src/extension.ts index 647acae3..b31e15b8 100644 --- a/packages/vscode-ide-companion/src/extension.ts +++ b/packages/vscode-ide-companion/src/extension.ts @@ -5,18 +5,75 @@ */ import * as vscode from 'vscode'; -import { IDEServer } from './ide-server'; -import { createLogger } from './utils/logger'; +import { IDEServer } from './ide-server.js'; +import { DiffContentProvider, DiffManager } from './diff-manager.js'; +import { createLogger } from './utils/logger.js'; + +const IDE_WORKSPACE_PATH_ENV_VAR = 'GEMINI_CLI_IDE_WORKSPACE_PATH'; +export const DIFF_SCHEME = 'gemini-diff'; let ideServer: IDEServer; let logger: vscode.OutputChannel; + let log: (message: string) => void = () => {}; +function updateWorkspacePath(context: vscode.ExtensionContext) { + const workspaceFolders = vscode.workspace.workspaceFolders; + if (workspaceFolders && workspaceFolders.length === 1) { + const workspaceFolder = workspaceFolders[0]; + context.environmentVariableCollection.replace( + IDE_WORKSPACE_PATH_ENV_VAR, + workspaceFolder.uri.fsPath, + ); + } else { + context.environmentVariableCollection.replace( + IDE_WORKSPACE_PATH_ENV_VAR, + '', + ); + } +} + export async function activate(context: vscode.ExtensionContext) { logger = vscode.window.createOutputChannel('Gemini CLI IDE Companion'); log = createLogger(context, logger); log('Extension activated'); - ideServer = new IDEServer(log); + + updateWorkspacePath(context); + + const diffContentProvider = new DiffContentProvider(); + const diffManager = new DiffManager(logger, diffContentProvider); + + context.subscriptions.push( + vscode.workspace.onDidCloseTextDocument((doc) => { + if (doc.uri.scheme === DIFF_SCHEME) { + 
diffManager.cancelDiff(doc.uri);
+      }
+    }),
+    vscode.workspace.registerTextDocumentContentProvider(
+      DIFF_SCHEME,
+      diffContentProvider,
+    ),
+    vscode.commands.registerCommand(
+      'qwen.diff.accept',
+      (uri?: vscode.Uri) => {
+        const docUri = uri ?? vscode.window.activeTextEditor?.document.uri;
+        if (docUri && docUri.scheme === DIFF_SCHEME) {
+          diffManager.acceptDiff(docUri);
+        }
+      },
+    ),
+    vscode.commands.registerCommand(
+      'qwen.diff.cancel',
+      (uri?: vscode.Uri) => {
+        const docUri = uri ?? vscode.window.activeTextEditor?.document.uri;
+        if (docUri && docUri.scheme === DIFF_SCHEME) {
+          diffManager.cancelDiff(docUri);
+        }
+      },
+    ),
+  );
+
+  ideServer = new IDEServer(log, diffManager);
   try {
     await ideServer.start(context);
   } catch (err) {
@@ -25,12 +82,22 @@ export async function activate(context: vscode.ExtensionContext) {
   }
 
   context.subscriptions.push(
+    vscode.workspace.onDidChangeWorkspaceFolders(() => {
+      updateWorkspacePath(context);
+    }),
     vscode.commands.registerCommand('gemini-cli.runGeminiCLI', () => {
       const geminiCmd = 'gemini';
       const terminal = vscode.window.createTerminal(`Gemini CLI`);
       terminal.show();
       terminal.sendText(geminiCmd);
     }),
+    vscode.commands.registerCommand('qwen-code.showNotices', async () => {
+      const noticePath = vscode.Uri.joinPath(
+        context.extensionUri,
+        'NOTICES.txt',
+      );
+      await vscode.window.showTextDocument(noticePath);
+    }),
   );
 }
diff --git a/packages/vscode-ide-companion/src/ide-server.ts b/packages/vscode-ide-companion/src/ide-server.ts
index f47463ba..30215ccc 100644
--- a/packages/vscode-ide-companion/src/ide-server.ts
+++ b/packages/vscode-ide-companion/src/ide-server.ts
@@ -14,49 +14,27 @@ import {
   type JSONRPCNotification,
 } from '@modelcontextprotocol/sdk/types.js';
 import { Server as HTTPServer } from 'node:http';
-import { RecentFilesManager } from './recent-files-manager.js';
+import { z } from 'zod';
+import { DiffManager } from './diff-manager.js';
+import { OpenFilesManager } from './open-files-manager.js';
const MCP_SESSION_ID_HEADER = 'mcp-session-id'; const IDE_SERVER_PORT_ENV_VAR = 'GEMINI_CLI_IDE_SERVER_PORT'; -const MAX_SELECTED_TEXT_LENGTH = 16384; // 16 KiB limit -function sendOpenFilesChangedNotification( +function sendIdeContextUpdateNotification( transport: StreamableHTTPServerTransport, log: (message: string) => void, - recentFilesManager: RecentFilesManager, + openFilesManager: OpenFilesManager, ) { - const editor = vscode.window.activeTextEditor; - const filePath = - editor && editor.document.uri.scheme === 'file' - ? editor.document.uri.fsPath - : ''; - const selection = editor?.selection; - const cursor = selection - ? { - // This value is a zero-based index, but the vscode IDE is one-based. - line: selection.active.line + 1, - character: selection.active.character, - } - : undefined; - let selectedText = editor?.document.getText(selection) ?? undefined; - if (selectedText && selectedText.length > MAX_SELECTED_TEXT_LENGTH) { - selectedText = - selectedText.substring(0, MAX_SELECTED_TEXT_LENGTH) + '... 
[TRUNCATED]'; - } + const ideContext = openFilesManager.state; + const notification: JSONRPCNotification = { jsonrpc: '2.0', - method: 'ide/openFilesChanged', - params: { - activeFile: filePath, - recentOpenFiles: recentFilesManager.recentFiles.filter( - (file) => file.filePath !== filePath, - ), - cursor, - selectedText, - }, + method: 'ide/contextUpdate', + params: ideContext, }; log( - `Sending active file changed notification: ${JSON.stringify( + `Sending IDE context update notification: ${JSON.stringify( notification, null, 2, @@ -69,32 +47,42 @@ export class IDEServer { private server: HTTPServer | undefined; private context: vscode.ExtensionContext | undefined; private log: (message: string) => void; + diffManager: DiffManager; - constructor(log: (message: string) => void) { + constructor(log: (message: string) => void, diffManager: DiffManager) { this.log = log; + this.diffManager = diffManager; } async start(context: vscode.ExtensionContext) { this.context = context; + const sessionsWithInitialNotification = new Set(); const transports: { [sessionId: string]: StreamableHTTPServerTransport } = {}; - const sessionsWithInitialNotification = new Set(); const app = express(); app.use(express.json()); - const mcpServer = createMcpServer(); + const mcpServer = createMcpServer(this.diffManager); - const recentFilesManager = new RecentFilesManager(context); - const onDidChangeSubscription = recentFilesManager.onDidChange(() => { + const openFilesManager = new OpenFilesManager(context); + const onDidChangeSubscription = openFilesManager.onDidChange(() => { for (const transport of Object.values(transports)) { - sendOpenFilesChangedNotification( + sendIdeContextUpdateNotification( transport, this.log.bind(this), - recentFilesManager, + openFilesManager, ); } }); context.subscriptions.push(onDidChangeSubscription); + const onDidChangeDiffSubscription = this.diffManager.onDidChange( + (notification: JSONRPCNotification) => { + for (const transport of 
Object.values(transports)) { + transport.send(notification); + } + }, + ); + context.subscriptions.push(onDidChangeDiffSubscription); app.post('/mcp', async (req: Request, res: Response) => { const sessionId = req.headers[MCP_SESSION_ID_HEADER] as @@ -112,7 +100,6 @@ export class IDEServer { transports[newSessionId] = transport; }, }); - const keepAlive = setInterval(() => { try { transport.send({ jsonrpc: '2.0', method: 'ping' }); @@ -191,10 +178,10 @@ export class IDEServer { } if (!sessionsWithInitialNotification.has(sessionId)) { - sendOpenFilesChangedNotification( + sendIdeContextUpdateNotification( transport, this.log.bind(this), - recentFilesManager, + openFilesManager, ); sessionsWithInitialNotification.add(sessionId); } @@ -236,7 +223,7 @@ export class IDEServer { } } -const createMcpServer = () => { +const createMcpServer = (diffManager: DiffManager) => { const server = new McpServer( { name: 'gemini-cli-companion-mcp-server', @@ -244,5 +231,54 @@ const createMcpServer = () => { }, { capabilities: { logging: {} } }, ); + server.registerTool( + 'openDiff', + { + description: + '(IDE Tool) Open a diff view to create or modify a file. Returns a notification once the diff has been accepted or rejected.', + inputSchema: z.object({ + filePath: z.string(), + // TODO(chrstn): determine if this should be required or not. + newContent: z.string().optional(), + }).shape, + }, + async ({ + filePath, + newContent, + }: { + filePath: string; + newContent?: string; + }) => { + await diffManager.showDiff(filePath, newContent ?? 
''); + return { + content: [ + { + type: 'text', + text: `Showing diff for ${filePath}`, + }, + ], + }; + }, + ); + server.registerTool( + 'closeDiff', + { + description: '(IDE Tool) Close an open diff view for a specific file.', + inputSchema: z.object({ + filePath: z.string(), + }).shape, + }, + async ({ filePath }: { filePath: string }) => { + await diffManager.closeDiff(filePath); + return { + content: [ + { + type: 'text', + text: `Closed diff for ${filePath}`, + }, + ], + }; + }, + ); return server; }; diff --git a/packages/vscode-ide-companion/src/open-files-manager.test.ts b/packages/vscode-ide-companion/src/open-files-manager.test.ts new file mode 100644 index 00000000..0b1ada82 --- /dev/null +++ b/packages/vscode-ide-companion/src/open-files-manager.test.ts @@ -0,0 +1,440 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; +import * as vscode from 'vscode'; +import { OpenFilesManager, MAX_FILES } from './open-files-manager.js'; + +vi.mock('vscode', () => ({ + EventEmitter: vi.fn(() => { + const listeners: Array<(e: void) => unknown> = []; + return { + event: vi.fn((listener) => { + listeners.push(listener); + return { dispose: vi.fn() }; + }), + fire: vi.fn(() => { + listeners.forEach((listener) => listener(undefined)); + }), + dispose: vi.fn(), + }; + }), + window: { + onDidChangeActiveTextEditor: vi.fn(), + onDidChangeTextEditorSelection: vi.fn(), + }, + workspace: { + onDidDeleteFiles: vi.fn(), + onDidCloseTextDocument: vi.fn(), + onDidRenameFiles: vi.fn(), + }, + Uri: { + file: (path: string) => ({ + fsPath: path, + scheme: 'file', + }), + }, + TextEditorSelectionChangeKind: { + Mouse: 2, + }, +})); + +describe('OpenFilesManager', () => { + let context: vscode.ExtensionContext; + let onDidChangeActiveTextEditorListener: ( + editor: vscode.TextEditor | undefined, + ) => void; + let onDidChangeTextEditorSelectionListener: ( + e: 
vscode.TextEditorSelectionChangeEvent, + ) => void; + let onDidDeleteFilesListener: (e: vscode.FileDeleteEvent) => void; + let onDidCloseTextDocumentListener: (doc: vscode.TextDocument) => void; + let onDidRenameFilesListener: (e: vscode.FileRenameEvent) => void; + + beforeEach(() => { + vi.useFakeTimers(); + + vi.mocked(vscode.window.onDidChangeActiveTextEditor).mockImplementation( + (listener) => { + onDidChangeActiveTextEditorListener = listener; + return { dispose: vi.fn() }; + }, + ); + vi.mocked(vscode.window.onDidChangeTextEditorSelection).mockImplementation( + (listener) => { + onDidChangeTextEditorSelectionListener = listener; + return { dispose: vi.fn() }; + }, + ); + vi.mocked(vscode.workspace.onDidDeleteFiles).mockImplementation( + (listener) => { + onDidDeleteFilesListener = listener; + return { dispose: vi.fn() }; + }, + ); + vi.mocked(vscode.workspace.onDidCloseTextDocument).mockImplementation( + (listener) => { + onDidCloseTextDocumentListener = listener; + return { dispose: vi.fn() }; + }, + ); + vi.mocked(vscode.workspace.onDidRenameFiles).mockImplementation( + (listener) => { + onDidRenameFilesListener = listener; + return { dispose: vi.fn() }; + }, + ); + + context = { + subscriptions: [], + } as unknown as vscode.ExtensionContext; + }); + + afterEach(() => { + vi.restoreAllMocks(); + vi.useRealTimers(); + }); + + const getUri = (path: string) => + vscode.Uri.file(path) as unknown as vscode.Uri; + + const addFile = (uri: vscode.Uri) => { + onDidChangeActiveTextEditorListener({ + document: { + uri, + getText: () => '', + }, + selection: { + active: { line: 0, character: 0 }, + }, + } as unknown as vscode.TextEditor); + }; + + it('adds a file to the list', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + expect(manager.state.workspaceState!.openFiles).toHaveLength(1); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + 
'/test/file1.txt', + ); + }); + + it('moves an existing file to the top', async () => { + const manager = new OpenFilesManager(context); + const uri1 = getUri('/test/file1.txt'); + const uri2 = getUri('/test/file2.txt'); + addFile(uri1); + addFile(uri2); + addFile(uri1); + await vi.advanceTimersByTimeAsync(100); + expect(manager.state.workspaceState!.openFiles).toHaveLength(2); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + '/test/file1.txt', + ); + }); + + it('does not exceed the max number of files', async () => { + const manager = new OpenFilesManager(context); + for (let i = 0; i < MAX_FILES + 5; i++) { + const uri = getUri(`/test/file${i}.txt`); + addFile(uri); + } + await vi.advanceTimersByTimeAsync(100); + expect(manager.state.workspaceState!.openFiles).toHaveLength(MAX_FILES); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + `/test/file${MAX_FILES + 4}.txt`, + ); + expect(manager.state.workspaceState!.openFiles![MAX_FILES - 1].path).toBe( + `/test/file5.txt`, + ); + }); + + it('fires onDidChange when a file is added', async () => { + const manager = new OpenFilesManager(context); + const onDidChangeSpy = vi.fn(); + manager.onDidChange(onDidChangeSpy); + + const uri = getUri('/test/file1.txt'); + addFile(uri); + + await vi.advanceTimersByTimeAsync(100); + expect(onDidChangeSpy).toHaveBeenCalled(); + }); + + it('removes a file when it is closed', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + expect(manager.state.workspaceState!.openFiles).toHaveLength(1); + + onDidCloseTextDocumentListener({ uri } as vscode.TextDocument); + await vi.advanceTimersByTimeAsync(100); + + expect(manager.state.workspaceState!.openFiles).toHaveLength(0); + }); + + it('fires onDidChange when a file is removed', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + addFile(uri); + 
await vi.advanceTimersByTimeAsync(100); + + const onDidChangeSpy = vi.fn(); + manager.onDidChange(onDidChangeSpy); + + onDidCloseTextDocumentListener({ uri } as vscode.TextDocument); + await vi.advanceTimersByTimeAsync(100); + + expect(onDidChangeSpy).toHaveBeenCalled(); + }); + + it('removes a file when it is deleted', async () => { + const manager = new OpenFilesManager(context); + const uri1 = getUri('/test/file1.txt'); + const uri2 = getUri('/test/file2.txt'); + addFile(uri1); + addFile(uri2); + await vi.advanceTimersByTimeAsync(100); + expect(manager.state.workspaceState!.openFiles).toHaveLength(2); + + onDidDeleteFilesListener({ files: [uri1] }); + await vi.advanceTimersByTimeAsync(100); + + expect(manager.state.workspaceState!.openFiles).toHaveLength(1); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + '/test/file2.txt', + ); + }); + + it('fires onDidChange when a file is deleted', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + + const onDidChangeSpy = vi.fn(); + manager.onDidChange(onDidChangeSpy); + + onDidDeleteFilesListener({ files: [uri] }); + await vi.advanceTimersByTimeAsync(100); + + expect(onDidChangeSpy).toHaveBeenCalled(); + }); + + it('removes multiple files when they are deleted', async () => { + const manager = new OpenFilesManager(context); + const uri1 = getUri('/test/file1.txt'); + const uri2 = getUri('/test/file2.txt'); + const uri3 = getUri('/test/file3.txt'); + addFile(uri1); + addFile(uri2); + addFile(uri3); + await vi.advanceTimersByTimeAsync(100); + expect(manager.state.workspaceState!.openFiles).toHaveLength(3); + + onDidDeleteFilesListener({ files: [uri1, uri3] }); + await vi.advanceTimersByTimeAsync(100); + + expect(manager.state.workspaceState!.openFiles).toHaveLength(1); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + '/test/file2.txt', + ); + }); + + it('fires onDidChange only 
once when adding an existing file', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + + const onDidChangeSpy = vi.fn(); + manager.onDidChange(onDidChangeSpy); + + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + expect(onDidChangeSpy).toHaveBeenCalledTimes(1); + }); + + it('updates the file when it is renamed', async () => { + const manager = new OpenFilesManager(context); + const oldUri = getUri('/test/file1.txt'); + const newUri = getUri('/test/file2.txt'); + addFile(oldUri); + await vi.advanceTimersByTimeAsync(100); + expect(manager.state.workspaceState!.openFiles).toHaveLength(1); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + '/test/file1.txt', + ); + + onDidRenameFilesListener({ files: [{ oldUri, newUri }] }); + await vi.advanceTimersByTimeAsync(100); + + expect(manager.state.workspaceState!.openFiles).toHaveLength(1); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + '/test/file2.txt', + ); + }); + + it('adds a file when the active editor changes', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + + expect(manager.state.workspaceState!.openFiles).toHaveLength(1); + expect(manager.state.workspaceState!.openFiles![0].path).toBe( + '/test/file1.txt', + ); + }); + + it('updates the cursor position on selection change', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + + const selection = { + active: { line: 10, character: 20 }, + } as vscode.Selection; + + onDidChangeTextEditorSelectionListener({ + textEditor: { + document: { uri, getText: () => '' }, + selection, + } as vscode.TextEditor, + selections: [selection], + kind: vscode.TextEditorSelectionChangeKind.Mouse, + }); + + 
await vi.advanceTimersByTimeAsync(100); + + const file = manager.state.workspaceState!.openFiles![0]; + expect(file.cursor).toEqual({ line: 11, character: 20 }); + }); + + it('updates the selected text on selection change', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + const selection = { + active: { line: 10, character: 20 }, + } as vscode.Selection; + + // We need to override the mock for getText for this test + const textEditor = { + document: { + uri, + getText: vi.fn().mockReturnValue('selected text'), + }, + selection, + } as unknown as vscode.TextEditor; + + onDidChangeActiveTextEditorListener(textEditor); + await vi.advanceTimersByTimeAsync(100); + + onDidChangeTextEditorSelectionListener({ + textEditor, + selections: [selection], + kind: vscode.TextEditorSelectionChangeKind.Mouse, + }); + + await vi.advanceTimersByTimeAsync(100); + + const file = manager.state.workspaceState!.openFiles![0]; + expect(file.selectedText).toBe('selected text'); + expect(textEditor.document.getText).toHaveBeenCalledWith(selection); + }); + + it('truncates long selected text', async () => { + const manager = new OpenFilesManager(context); + const uri = getUri('/test/file1.txt'); + const longText = 'a'.repeat(20000); + const truncatedText = longText.substring(0, 16384) + '... 
[TRUNCATED]'; + + const selection = { + active: { line: 10, character: 20 }, + } as vscode.Selection; + + const textEditor = { + document: { + uri, + getText: vi.fn().mockReturnValue(longText), + }, + selection, + } as unknown as vscode.TextEditor; + + onDidChangeActiveTextEditorListener(textEditor); + await vi.advanceTimersByTimeAsync(100); + + onDidChangeTextEditorSelectionListener({ + textEditor, + selections: [selection], + kind: vscode.TextEditorSelectionChangeKind.Mouse, + }); + + await vi.advanceTimersByTimeAsync(100); + + const file = manager.state.workspaceState!.openFiles![0]; + expect(file.selectedText).toBe(truncatedText); + }); + + it('deactivates the previously active file', async () => { + const manager = new OpenFilesManager(context); + const uri1 = getUri('/test/file1.txt'); + const uri2 = getUri('/test/file2.txt'); + + addFile(uri1); + await vi.advanceTimersByTimeAsync(100); + + const selection = { + active: { line: 10, character: 20 }, + } as vscode.Selection; + + onDidChangeTextEditorSelectionListener({ + textEditor: { + document: { uri: uri1, getText: () => '' }, + selection, + } as vscode.TextEditor, + selections: [selection], + kind: vscode.TextEditorSelectionChangeKind.Mouse, + }); + await vi.advanceTimersByTimeAsync(100); + + let file1 = manager.state.workspaceState!.openFiles![0]; + expect(file1.isActive).toBe(true); + expect(file1.cursor).toBeDefined(); + + addFile(uri2); + await vi.advanceTimersByTimeAsync(100); + + file1 = manager.state.workspaceState!.openFiles!.find( + (f) => f.path === '/test/file1.txt', + )!; + const file2 = manager.state.workspaceState!.openFiles![0]; + + expect(file1.isActive).toBe(false); + expect(file1.cursor).toBeUndefined(); + expect(file1.selectedText).toBeUndefined(); + expect(file2.path).toBe('/test/file2.txt'); + expect(file2.isActive).toBe(true); + }); + + it('ignores non-file URIs', async () => { + const manager = new OpenFilesManager(context); + const uri = { + fsPath: '/test/file1.txt', + scheme: 
'untitled', + } as vscode.Uri; + + addFile(uri); + await vi.advanceTimersByTimeAsync(100); + + expect(manager.state.workspaceState!.openFiles).toHaveLength(0); + }); +}); diff --git a/packages/vscode-ide-companion/src/open-files-manager.ts b/packages/vscode-ide-companion/src/open-files-manager.ts new file mode 100644 index 00000000..8f4e4ad7 --- /dev/null +++ b/packages/vscode-ide-companion/src/open-files-manager.ts @@ -0,0 +1,178 @@ +/** + * @license + * Copyright 2025 Google LLC + * SPDX-License-Identifier: Apache-2.0 + */ + +import * as vscode from 'vscode'; +import type { File, IdeContext } from '@qwen-code/qwen-code-core'; + +export const MAX_FILES = 10; +const MAX_SELECTED_TEXT_LENGTH = 16384; // 16 KiB limit + +/** + * Keeps track of the workspace state, including open files, cursor position, and selected text. + */ +export class OpenFilesManager { + private readonly onDidChangeEmitter = new vscode.EventEmitter(); + readonly onDidChange = this.onDidChangeEmitter.event; + private debounceTimer: NodeJS.Timeout | undefined; + private openFiles: File[] = []; + + constructor(private readonly context: vscode.ExtensionContext) { + const editorWatcher = vscode.window.onDidChangeActiveTextEditor( + (editor) => { + if (editor && this.isFileUri(editor.document.uri)) { + this.addOrMoveToFront(editor); + this.fireWithDebounce(); + } + }, + ); + + const selectionWatcher = vscode.window.onDidChangeTextEditorSelection( + (event) => { + if (this.isFileUri(event.textEditor.document.uri)) { + this.updateActiveContext(event.textEditor); + this.fireWithDebounce(); + } + }, + ); + + const closeWatcher = vscode.workspace.onDidCloseTextDocument((document) => { + if (this.isFileUri(document.uri)) { + this.remove(document.uri); + this.fireWithDebounce(); + } + }); + + const deleteWatcher = vscode.workspace.onDidDeleteFiles((event) => { + for (const uri of event.files) { + if (this.isFileUri(uri)) { + this.remove(uri); + } + } + this.fireWithDebounce(); + }); + + const renameWatcher = 
vscode.workspace.onDidRenameFiles((event) => { + for (const { oldUri, newUri } of event.files) { + if (this.isFileUri(oldUri)) { + if (this.isFileUri(newUri)) { + this.rename(oldUri, newUri); + } else { + // The file was renamed to a non-file URI, so we should remove it. + this.remove(oldUri); + } + } + } + this.fireWithDebounce(); + }); + + context.subscriptions.push( + editorWatcher, + selectionWatcher, + closeWatcher, + deleteWatcher, + renameWatcher, + ); + + // Just add current active file on start-up. + if ( + vscode.window.activeTextEditor && + this.isFileUri(vscode.window.activeTextEditor.document.uri) + ) { + this.addOrMoveToFront(vscode.window.activeTextEditor); + } + } + + private isFileUri(uri: vscode.Uri): boolean { + return uri.scheme === 'file'; + } + + private addOrMoveToFront(editor: vscode.TextEditor) { + // Deactivate previous active file + const currentActive = this.openFiles.find((f) => f.isActive); + if (currentActive) { + currentActive.isActive = false; + currentActive.cursor = undefined; + currentActive.selectedText = undefined; + } + + // Remove if it exists + const index = this.openFiles.findIndex( + (f) => f.path === editor.document.uri.fsPath, + ); + if (index !== -1) { + this.openFiles.splice(index, 1); + } + + // Add to the front as active + this.openFiles.unshift({ + path: editor.document.uri.fsPath, + timestamp: Date.now(), + isActive: true, + }); + + // Enforce max length + if (this.openFiles.length > MAX_FILES) { + this.openFiles.pop(); + } + + this.updateActiveContext(editor); + } + + private remove(uri: vscode.Uri) { + const index = this.openFiles.findIndex((f) => f.path === uri.fsPath); + if (index !== -1) { + this.openFiles.splice(index, 1); + } + } + + private rename(oldUri: vscode.Uri, newUri: vscode.Uri) { + const index = this.openFiles.findIndex((f) => f.path === oldUri.fsPath); + if (index !== -1) { + this.openFiles[index].path = newUri.fsPath; + } + } + + private updateActiveContext(editor: vscode.TextEditor) { + const 
file = this.openFiles.find( + (f) => f.path === editor.document.uri.fsPath, + ); + if (!file || !file.isActive) { + return; + } + + file.cursor = editor.selection.active + ? { + line: editor.selection.active.line + 1, + character: editor.selection.active.character, + } + : undefined; + + let selectedText: string | undefined = + editor.document.getText(editor.selection) || undefined; + if (selectedText && selectedText.length > MAX_SELECTED_TEXT_LENGTH) { + selectedText = + selectedText.substring(0, MAX_SELECTED_TEXT_LENGTH) + '... [TRUNCATED]'; + } + file.selectedText = selectedText; + } + + private fireWithDebounce() { + if (this.debounceTimer) { + clearTimeout(this.debounceTimer); + } + this.debounceTimer = setTimeout(() => { + this.onDidChangeEmitter.fire(); + }, 50); // 50ms + } + + get state(): IdeContext { + return { + workspaceState: { + openFiles: [...this.openFiles], + }, + }; + } +} diff --git a/packages/vscode-ide-companion/src/recent-files-manager.test.ts b/packages/vscode-ide-companion/src/recent-files-manager.test.ts deleted file mode 100644 index 9d56a10d..00000000 --- a/packages/vscode-ide-companion/src/recent-files-manager.test.ts +++ /dev/null @@ -1,278 +0,0 @@ -/** - * @license - * Copyright 2025 Google LLC - * SPDX-License-Identifier: Apache-2.0 - */ - -import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'; -import * as vscode from 'vscode'; -import { - RecentFilesManager, - MAX_FILES, - MAX_FILE_AGE_MINUTES, -} from './recent-files-manager.js'; - -vi.mock('vscode', () => ({ - EventEmitter: vi.fn(() => { - const listeners: Array<(e: void) => unknown> = []; - return { - event: vi.fn((listener) => { - listeners.push(listener); - return { dispose: vi.fn() }; - }), - fire: vi.fn(() => { - listeners.forEach((listener) => listener(undefined)); - }), - dispose: vi.fn(), - }; - }), - window: { - onDidChangeActiveTextEditor: vi.fn(), - onDidChangeTextEditorSelection: vi.fn(), - }, - workspace: { - onDidDeleteFiles: vi.fn(), - 
onDidCloseTextDocument: vi.fn(), - onDidRenameFiles: vi.fn(), - }, - Uri: { - file: (path: string) => ({ - fsPath: path, - scheme: 'file', - }), - }, -})); - -describe('RecentFilesManager', () => { - let context: vscode.ExtensionContext; - let onDidChangeActiveTextEditorListener: ( - editor: vscode.TextEditor | undefined, - ) => void; - let onDidDeleteFilesListener: (e: vscode.FileDeleteEvent) => void; - let onDidCloseTextDocumentListener: (doc: vscode.TextDocument) => void; - let onDidRenameFilesListener: (e: vscode.FileRenameEvent) => void; - - beforeEach(() => { - vi.useFakeTimers(); - - vi.mocked(vscode.window.onDidChangeActiveTextEditor).mockImplementation( - (listener) => { - onDidChangeActiveTextEditorListener = listener; - return { dispose: vi.fn() }; - }, - ); - vi.mocked(vscode.workspace.onDidDeleteFiles).mockImplementation( - (listener) => { - onDidDeleteFilesListener = listener; - return { dispose: vi.fn() }; - }, - ); - vi.mocked(vscode.workspace.onDidCloseTextDocument).mockImplementation( - (listener) => { - onDidCloseTextDocumentListener = listener; - return { dispose: vi.fn() }; - }, - ); - vi.mocked(vscode.workspace.onDidRenameFiles).mockImplementation( - (listener) => { - onDidRenameFilesListener = listener; - return { dispose: vi.fn() }; - }, - ); - - context = { - subscriptions: [], - } as unknown as vscode.ExtensionContext; - }); - - afterEach(() => { - vi.restoreAllMocks(); - vi.useRealTimers(); - }); - - const getUri = (path: string) => - vscode.Uri.file(path) as unknown as vscode.Uri; - - it('adds a file to the list', async () => { - const manager = new RecentFilesManager(context); - const uri = getUri('/test/file1.txt'); - manager.add(uri); - await vi.advanceTimersByTimeAsync(100); - expect(manager.recentFiles).toHaveLength(1); - expect(manager.recentFiles[0].filePath).toBe('/test/file1.txt'); - }); - - it('moves an existing file to the top', async () => { - const manager = new RecentFilesManager(context); - const uri1 = 
getUri('/test/file1.txt'); - const uri2 = getUri('/test/file2.txt'); - manager.add(uri1); - manager.add(uri2); - manager.add(uri1); - await vi.advanceTimersByTimeAsync(100); - expect(manager.recentFiles).toHaveLength(2); - expect(manager.recentFiles[0].filePath).toBe('/test/file1.txt'); - }); - - it('does not exceed the max number of files', async () => { - const manager = new RecentFilesManager(context); - for (let i = 0; i < MAX_FILES + 5; i++) { - const uri = getUri(`/test/file${i}.txt`); - manager.add(uri); - } - await vi.advanceTimersByTimeAsync(100); - expect(manager.recentFiles).toHaveLength(MAX_FILES); - expect(manager.recentFiles[0].filePath).toBe( - `/test/file${MAX_FILES + 4}.txt`, - ); - expect(manager.recentFiles[MAX_FILES - 1].filePath).toBe(`/test/file5.txt`); - }); - - it('fires onDidChange when a file is added', async () => { - const manager = new RecentFilesManager(context); - const onDidChangeSpy = vi.fn(); - manager.onDidChange(onDidChangeSpy); - - const uri = getUri('/test/file1.txt'); - manager.add(uri); - - await vi.advanceTimersByTimeAsync(100); - expect(onDidChangeSpy).toHaveBeenCalled(); - }); - - it('removes a file when it is closed', async () => { - const manager = new RecentFilesManager(context); - const uri = getUri('/test/file1.txt'); - manager.add(uri); - await vi.advanceTimersByTimeAsync(100); - expect(manager.recentFiles).toHaveLength(1); - - onDidCloseTextDocumentListener({ uri } as vscode.TextDocument); - await vi.advanceTimersByTimeAsync(100); - - expect(manager.recentFiles).toHaveLength(0); - }); - - it('fires onDidChange when a file is removed', async () => { - const manager = new RecentFilesManager(context); - const uri = getUri('/test/file1.txt'); - manager.add(uri); - await vi.advanceTimersByTimeAsync(100); - - const onDidChangeSpy = vi.fn(); - manager.onDidChange(onDidChangeSpy); - - onDidCloseTextDocumentListener({ uri } as vscode.TextDocument); - await vi.advanceTimersByTimeAsync(100); - - 
expect(onDidChangeSpy).toHaveBeenCalled(); - }); - - it('removes a file when it is deleted', async () => { - const manager = new RecentFilesManager(context); - const uri1 = getUri('/test/file1.txt'); - const uri2 = getUri('/test/file2.txt'); - manager.add(uri1); - manager.add(uri2); - await vi.advanceTimersByTimeAsync(100); - expect(manager.recentFiles).toHaveLength(2); - - onDidDeleteFilesListener({ files: [uri1] }); - await vi.advanceTimersByTimeAsync(100); - - expect(manager.recentFiles).toHaveLength(1); - expect(manager.recentFiles[0].filePath).toBe('/test/file2.txt'); - }); - - it('fires onDidChange when a file is deleted', async () => { - const manager = new RecentFilesManager(context); - const uri = getUri('/test/file1.txt'); - manager.add(uri); - await vi.advanceTimersByTimeAsync(100); - - const onDidChangeSpy = vi.fn(); - manager.onDidChange(onDidChangeSpy); - - onDidDeleteFilesListener({ files: [uri] }); - await vi.advanceTimersByTimeAsync(100); - - expect(onDidChangeSpy).toHaveBeenCalled(); - }); - - it('removes multiple files when they are deleted', async () => { - const manager = new RecentFilesManager(context); - const uri1 = getUri('/test/file1.txt'); - const uri2 = getUri('/test/file2.txt'); - const uri3 = getUri('/test/file3.txt'); - manager.add(uri1); - manager.add(uri2); - manager.add(uri3); - await vi.advanceTimersByTimeAsync(100); - expect(manager.recentFiles).toHaveLength(3); - - onDidDeleteFilesListener({ files: [uri1, uri3] }); - await vi.advanceTimersByTimeAsync(100); - - expect(manager.recentFiles).toHaveLength(1); - expect(manager.recentFiles[0].filePath).toBe('/test/file2.txt'); - }); - - it('prunes files older than the max age', () => { - const manager = new RecentFilesManager(context); - const uri1 = getUri('/test/file1.txt'); - manager.add(uri1); - - // Advance time by more than the max age - const twoMinutesMs = (MAX_FILE_AGE_MINUTES + 1) * 60 * 1000; - vi.advanceTimersByTime(twoMinutesMs); - - const uri2 = getUri('/test/file2.txt'); 
- manager.add(uri2); - - expect(manager.recentFiles).toHaveLength(1); - expect(manager.recentFiles[0].filePath).toBe('/test/file2.txt'); - }); - - it('fires onDidChange only once when adding an existing file', async () => { - const manager = new RecentFilesManager(context); - const uri = getUri('/test/file1.txt'); - manager.add(uri); - await vi.advanceTimersByTimeAsync(100); - - const onDidChangeSpy = vi.fn(); - manager.onDidChange(onDidChangeSpy); - - manager.add(uri); - await vi.advanceTimersByTimeAsync(100); - expect(onDidChangeSpy).toHaveBeenCalledTimes(1); - }); - - it('updates the file when it is renamed', async () => { - const manager = new RecentFilesManager(context); - const oldUri = getUri('/test/file1.txt'); - const newUri = getUri('/test/file2.txt'); - manager.add(oldUri); - await vi.advanceTimersByTimeAsync(100); - expect(manager.recentFiles).toHaveLength(1); - expect(manager.recentFiles[0].filePath).toBe('/test/file1.txt'); - - onDidRenameFilesListener({ files: [{ oldUri, newUri }] }); - await vi.advanceTimersByTimeAsync(100); - - expect(manager.recentFiles).toHaveLength(1); - expect(manager.recentFiles[0].filePath).toBe('/test/file2.txt'); - }); - - it('adds a file when the active editor changes', async () => { - const manager = new RecentFilesManager(context); - const uri = getUri('/test/file1.txt'); - - onDidChangeActiveTextEditorListener({ - document: { uri }, - } as vscode.TextEditor); - await vi.advanceTimersByTimeAsync(100); - - expect(manager.recentFiles).toHaveLength(1); - expect(manager.recentFiles[0].filePath).toBe('/test/file1.txt'); - }); -}); diff --git a/packages/vscode-ide-companion/src/recent-files-manager.ts b/packages/vscode-ide-companion/src/recent-files-manager.ts deleted file mode 100644 index 317cc903..00000000 --- a/packages/vscode-ide-companion/src/recent-files-manager.ts +++ /dev/null @@ -1,111 +0,0 @@ -/** - * @license - * Copyright 2025 Google LLC - * SPDX-License-Identifier: Apache-2.0 - */ - -import * as vscode from 
'vscode';
-
-export const MAX_FILES = 10;
-export const MAX_FILE_AGE_MINUTES = 5;
-
-interface RecentFile {
-  uri: vscode.Uri;
-  timestamp: number;
-}
-
-/**
- * Keeps track of the 10 most recently-opened files
- * opened less than 5 min ago. If a file is closed or deleted,
- * it will be removed. If the max length is reached, older files will get removed first.
- */
-export class RecentFilesManager {
-  private readonly files: RecentFile[] = [];
-  private readonly onDidChangeEmitter = new vscode.EventEmitter();
-  readonly onDidChange = this.onDidChangeEmitter.event;
-  private debounceTimer: NodeJS.Timeout | undefined;
-
-  constructor(private readonly context: vscode.ExtensionContext) {
-    const editorWatcher = vscode.window.onDidChangeActiveTextEditor(
-      (editor) => {
-        if (editor) {
-          this.add(editor.document.uri);
-        }
-      },
-    );
-    const deleteWatcher = vscode.workspace.onDidDeleteFiles((event) => {
-      for (const uri of event.files) {
-        this.remove(uri);
-      }
-    });
-    const closeWatcher = vscode.workspace.onDidCloseTextDocument((document) => {
-      this.remove(document.uri);
-    });
-    const renameWatcher = vscode.workspace.onDidRenameFiles((event) => {
-      for (const { oldUri, newUri } of event.files) {
-        this.remove(oldUri, false);
-        this.add(newUri);
-      }
-    });
-
-    const selectionWatcher = vscode.window.onDidChangeTextEditorSelection(
-      () => {
-        this.fireWithDebounce();
-      },
-    );
-
-    context.subscriptions.push(
-      editorWatcher,
-      deleteWatcher,
-      closeWatcher,
-      renameWatcher,
-      selectionWatcher,
-    );
-  }
-
-  private fireWithDebounce() {
-    if (this.debounceTimer) {
-      clearTimeout(this.debounceTimer);
-    }
-    this.debounceTimer = setTimeout(() => {
-      this.onDidChangeEmitter.fire();
-    }, 50); // 50ms
-  }
-
-  private remove(uri: vscode.Uri, fireEvent = true) {
-    const index = this.files.findIndex(
-      (file) => file.uri.fsPath === uri.fsPath,
-    );
-    if (index !== -1) {
-      this.files.splice(index, 1);
-      if (fireEvent) {
-        this.fireWithDebounce();
-      }
-    }
-  }
-
-  add(uri: vscode.Uri) {
-    if (uri.scheme !== 'file') {
-      return;
-    }
-
-    this.remove(uri, false);
-    this.files.unshift({ uri, timestamp: Date.now() });
-
-    if (this.files.length > MAX_FILES) {
-      this.files.pop();
-    }
-    this.fireWithDebounce();
-  }
-
-  get recentFiles(): Array<{ filePath: string; timestamp: number }> {
-    const now = Date.now();
-    const maxAgeInMs = MAX_FILE_AGE_MINUTES * 60 * 1000;
-    return this.files
-      .filter((file) => now - file.timestamp < maxAgeInMs)
-      .map((file) => ({
-        filePath: file.uri.fsPath,
-        timestamp: file.timestamp,
-      }));
-  }
-}
diff --git a/scripts/test-windows-paths.js b/scripts/test-windows-paths.js
new file mode 100644
index 00000000..d25d29c2
--- /dev/null
+++ b/scripts/test-windows-paths.js
@@ -0,0 +1,51 @@
+/**
+ * @license
+ * Copyright 2025 Google LLC
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+import path from 'path';
+import { fileURLToPath } from 'url';
+
+// Test how paths are normalized
+function testPathNormalization() {
+  // Use platform-agnostic path construction instead of hardcoded paths
+  const testPath = path.join('test', 'project', 'src', 'file.md');
+  const absoluteTestPath = path.resolve('test', 'project', 'src', 'file.md');
+
+  console.log('Testing path normalization:');
+  console.log('Relative path:', testPath);
+  console.log('Absolute path:', absoluteTestPath);
+
+  // Test path.join with different segments
+  const joinedPath = path.join('test', 'project', 'src', 'file.md');
+  console.log('Joined path:', joinedPath);
+
+  // Test path.normalize
+  console.log('Normalized relative path:', path.normalize(testPath));
+  console.log('Normalized absolute path:', path.normalize(absoluteTestPath));
+
+  // Test how the test would see these paths
+  const testContent = `--- File: ${absoluteTestPath} ---\nContent\n--- End of File: ${absoluteTestPath} ---`;
+  console.log('\nTest content with platform-agnostic paths:');
+  console.log(testContent);
+
+  // Try to match with different patterns
+  const marker = `--- File: ${absoluteTestPath} ---`;
+  console.log('\nTrying to match:', marker);
+  console.log('Direct match:', testContent.includes(marker));
+
+  // Test with normalized path in marker
+  const normalizedMarker = `--- File: ${path.normalize(absoluteTestPath)} ---`;
+  console.log(
+    'Normalized marker match:',
+    testContent.includes(normalizedMarker),
+  );
+
+  // Test path resolution
+  const __filename = fileURLToPath(import.meta.url);
+  console.log('\nCurrent file path:', __filename);
+  console.log('Directory name:', path.dirname(__filename));
+}
+
+testPathNormalization();
diff --git a/scripts/version.js b/scripts/version.js
index 5d85eb80..692a2135 100644
--- a/scripts/version.js
+++ b/scripts/version.js
@@ -39,15 +39,28 @@ const npmVersionArg = isSpecificVersion ? versionArg : versionArg;
 
 // 3. Bump the version in the root and all workspace package.json files.
 run(`npm version ${npmVersionArg} --no-git-tag-version --allow-same-version`);
-run(
-  `npm version ${npmVersionArg} --workspaces --no-git-tag-version --allow-same-version`,
+
+// 4. Get all workspaces and filter out the one we don't want to version.
+const workspacesToExclude = ['qwen-code-vscode-ide-companion'];
+const lsOutput = JSON.parse(
+  execSync('npm ls --workspaces --json --depth=0').toString(),
+);
+const allWorkspaces = Object.keys(lsOutput.dependencies || {});
+const workspacesToVersion = allWorkspaces.filter(
+  (wsName) => !workspacesToExclude.includes(wsName),
 );
 
-// 3. Get the new version number from the root package.json
+for (const workspaceName of workspacesToVersion) {
+  run(
+    `npm version ${npmVersionArg} --workspace ${workspaceName} --no-git-tag-version --allow-same-version`,
+  );
+}
+
+// 5. Get the new version number from the root package.json
 const rootPackageJsonPath = resolve(process.cwd(), 'package.json');
 const newVersion = readJson(rootPackageJsonPath).version;
 
-// 4. Update the sandboxImageUri in the root package.json
+// 6. Update the sandboxImageUri in the root package.json
 const rootPackageJson = readJson(rootPackageJsonPath);
 if (rootPackageJson.config?.sandboxImageUri) {
   rootPackageJson.config.sandboxImageUri =
@@ -56,7 +69,7 @@ if (rootPackageJson.config?.sandboxImageUri) {
   writeJson(rootPackageJsonPath, rootPackageJson);
 }
 
-// 5. Update the sandboxImageUri in the cli package.json
+// 7. Update the sandboxImageUri in the cli package.json
 const cliPackageJsonPath = resolve(process.cwd(), 'packages/cli/package.json');
 const cliPackageJson = readJson(cliPackageJsonPath);
 if (cliPackageJson.config?.sandboxImageUri) {
@@ -68,7 +81,7 @@ if (cliPackageJson.config?.sandboxImageUri) {
   writeJson(cliPackageJsonPath, cliPackageJson);
 }
 
-// 6. Run `npm install` to update package-lock.json.
+// 8. Run `npm install` to update package-lock.json.
 run('npm install');
 
 console.log(`Successfully bumped versions to v${newVersion}.`);