Compare commits

439 Commits

Author SHA1 Message Date
github-actions[bot]
cac3202945 chore(release): v0.0.8-nightly.5 2025-08-20 08:32:02 +00:00
tanzhenxin
213a017539 feat: add automated release notes generation with previous tag detection 2025-08-20 16:17:49 +08:00
pomelo
8833a112a9 Merge pull request #364 from QwenLM/chore/sync-gemini-cli-v0.1.19
Sync upstream gemini-cli v0.1.19
2025-08-20 16:04:47 +08:00
pomelo
d933795d8e docs: Update security policy with Alibaba contact information (#390) 2025-08-20 15:30:31 +08:00
tanzhenxin
c8f3b15971 chore: npm run format 2025-08-20 11:28:18 +08:00
tanzhenxin
2fcacb70b9 chore: fix ide installer 2025-08-20 11:25:11 +08:00
tanzhenxin
a0a1d6e253 chore: fix build issue 2025-08-20 11:11:18 +08:00
tanzhenxin
0de3236076 Merge branch 'main' into chore/sync-gemini-cli-v0.1.19 2025-08-20 10:45:42 +08:00
tanzhenxin
303b6999f4 fix: qwen vscode extension 2025-08-19 18:20:40 +08:00
pomelo
93f5e59710 Merge pull request #171 from dowithless/patch-1
doc: Add links to translated README versions
2025-08-19 15:13:42 +08:00
Fan
7b378e826c feat: project/global save location option (#368) 2025-08-18 23:09:50 +08:00
tanzhenxin
a19db16485 chore: npm run format 2025-08-18 20:01:54 +08:00
tanzhenxin
7dbc240847 chore: sync gemini-cli v0.1.19 2025-08-18 19:55:46 +08:00
thuan1412
5e70b34041 feat: use .geminiignore in grep tool (#349)
* feat: use .geminiignore in grep tool
2025-08-18 11:37:26 +08:00
tanzhenxin
df1479f864 Chore/release 0.0.7 (#343)
* chore: bump version to 0.0.7 and add changelog.md
2025-08-15 18:49:13 +08:00
Mingholy
14e6d3c01e Update qwen-code-pr-review.yml
Trigger Qwen PR Review when a PR opens.
Fix the auto-skip issue.
2025-08-15 18:24:43 +08:00
pomelo
da0b8b5534 Merge pull request #340 from QwenLM/feat/web_fetch_tool
feat: refactor web-fetch tool to remove google genai dependency
2025-08-15 18:10:32 +08:00
tanzhenxin
e1d502991d chore: remove https restriction 2025-08-15 17:58:05 +08:00
tanzhenxin
7e01554b9c chore: fix test case failure 2025-08-15 17:27:09 +08:00
tanzhenxin
36c65658ff chore: npm run lint 2025-08-15 17:16:05 +08:00
tanzhenxin
a925ac56fa Potential fix for code scanning alert no. 24: Incomplete URL substring sanitization
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-08-15 17:10:20 +08:00
tanzhenxin
5d4a9452d8 feat: refactor web-fetch tool to remove google genai dependency 2025-08-15 17:06:00 +08:00
tanzhenxin
3e082ae89a feat: replace google web search with tavily web search (#329) 2025-08-14 21:20:23 +08:00
Fan
51207043d0 fix: custom API's trailing space and empty tool id issues (#326)
* fix: generate random tool call id when serving API does not have one

* tmp
2025-08-14 21:18:52 +08:00
Mingholy
2403061bab fix: OpenAI tools (#328)
- MCP tool params schema lost causing all MCP not working well
- Compatible with occasional llm return tool call parameters that are invalid json
2025-08-14 21:18:26 +08:00
Mingholy
1ffcb51052 fix: separate static QR code and dynamic spin components (#327) 2025-08-14 21:17:56 +08:00
* fix: separate static QR code and dynamic spin components

* fix: format issues
2025-08-14 21:17:56 +08:00
tanzhenxin
c33d162ff2 Merge pull request #325 from QwenLM/fix/max_listeners_warning
fix: qwen logger exit handler setup
2025-08-14 20:06:14 +08:00
tanzhenxin
bbfe94cfe2 chore: npm run format 2025-08-14 18:54:24 +08:00
pomelo
03c7b1836f Merge pull request #323 from QwenLM/release/0.0.6
chore: bump version to 0.0.6
2025-08-14 18:54:04 +08:00
pomelo
f2ba6dbb8a Merge pull request #322 from QwenLM/fix/concurrent_requests
feat: prevent concurrent query submissions in useGeminiStream hook
2025-08-14 18:53:19 +08:00
tanzhenxin
2d0884b04d fix: qwen logger exit handler setup 2025-08-14 18:08:14 +08:00
tanzhenxin
fc70439355 chore: bump version to 0.0.6 2025-08-14 16:52:39 +08:00
tanzhenxin
0265b67b90 chore: npm run format & lint 2025-08-14 16:48:51 +08:00
tanzhenxin
1f91b9ece1 Merge pull request #309 from QwenLM/chore/sync-gemini-cli-v0.1.18
Sync with upstream gemini-cli v0.1.18
2025-08-14 16:47:31 +08:00
tanzhenxin
c58106079e feat: prevent concurrent query submissions in useGeminiStream hook 2025-08-14 16:39:26 +08:00
pomelo
6516d0d136 Merge pull request #313 from QwenLM/chore/log_api_request
chore: add api request logger
2025-08-13 18:54:02 +08:00
tanzhenxin
5369af61d2 chore: add api request logger 2025-08-13 18:51:40 +08:00
pomelo
6a4005cace Merge pull request #262 from nguu0123/main
feat(sandbox): add GHA to build sandbox image
2025-08-13 18:37:39 +08:00
tanzhenxin
290ccdbe21 chore: stick openai sdk version to 5.11.0 2025-08-13 16:10:04 +08:00
tanzhenxin
b5514fd052 chore: fix invalid package deps 2025-08-13 16:00:26 +08:00
tanzhenxin
bc92da04e9 Merge tag 'v0.1.18' of https://github.com/google-gemini/gemini-cli into chore/sync-gemini-cli-v0.1.18 2025-08-13 15:11:10 +08:00
gemini-cli-robot
5349c4d02b chore(release): v0.1.19 2025-08-12 18:21:15 +00:00
Shreya Keshive
67c6033147 Bump version number of companion extension to match next Gemini CLI version number (#6065) 2025-08-12 18:03:55 +00:00
owenofbrien
5d1d40fa2e Fix: log api response error status codes (#6015)
Co-authored-by: Gaurav <39389231+gsquared94@users.noreply.github.com>
2025-08-12 16:51:21 +00:00
Jacob Richman
804c181ac4 chore(integration-tests): refactor to typescript (#5645) 2025-08-12 16:19:09 +00:00
tanzhenxin
0bc45aeefe chore: build issue, fallback to 0.0.5 version 2025-08-12 21:58:12 +08:00
tanzhenxin
7856f52afb Merge pull request #298 from QwenLM/chore/pkg_version
Chore/pkg version
2025-08-12 21:12:07 +08:00
tanzhenxin
e986476fe0 chore: bump version to 0.0.6 2025-08-12 21:03:25 +08:00
tanzhenxin
cfc1aebee6 chore: use correct CLI_VERSION for logging 2025-08-12 21:00:17 +08:00
pomelo
ef1c8a4bfe Merge pull request #293 from Clarence-pan/main
fix: 🐛 fix EPERM error when run `qwen --sandbox` in macOS
2025-08-12 19:53:59 +08:00
tanzhenxin
484292b2ac Merge pull request #297 from QwenLM/chore/pr-review-action
chore: adjust workflow to run PR review
2025-08-12 19:35:07 +08:00
mingholy.lmh
f9659184d4 chore: adjust workflow to run PR review 2025-08-12 18:05:52 +08:00
tanzhenxin
6d5bb1b57c Merge pull request #284 from QwenLM/feat/usage_stats_logging
feat: add usage statistics logging for Qwen integration
2025-08-12 17:56:34 +08:00
tanzhenxin
fb9f2d292c Merge pull request #274 from QwenLM/feat/memory_tool_docs
Make `/init` respect configured context filename and align docs with QWEN.md
2025-08-12 17:56:16 +08:00
Clarence-pan
16ea8560b7 fix: 🐛 fix EPERM error when run qwen --sandbox in macOS 2025-08-12 15:04:01 +08:00
tanzhenxin
2655af079a chore: npm run format & lint 2025-08-12 12:21:39 +08:00
tanzhenxin
807844fb57 feat: implement usage stats logging with telemetry refactoring 2025-08-12 12:16:38 +08:00
JAYADITYA
2d1a6af890 feat(cli): support single Ctrl+C to cancel streaming, preserving double Ctrl+C to exit (#5838) 2025-08-12 04:13:57 +00:00
Ali Al Jufairi
f9efb2e24f docs(commands): add /settings command for user-friendly settings editing (#5984)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-12 04:08:35 +00:00
mingholy.lmh
2202d26ac7 fix: add comment 2025-08-12 11:58:58 +08:00
mingholy.lmh
58f66ccfc6 fix: openaiContentGenerator
- remove `metadata` when using unsupported models/providers
- use `qwen3-code-plus` as default, fix picking wrong model when refresh auth
2025-08-12 11:58:58 +08:00
mingholy.lmh
65c622c0ac test: tweak test cases 2025-08-12 11:58:41 +08:00
mingholy.lmh
a3ec2f52c9 fix: terminal flicker when waiting for login 2025-08-12 11:58:41 +08:00
Seth Vargo
d8fec54e81 feat(/setup-github): Use node to download the files (#5863) 2025-08-12 01:32:23 +00:00
Sandy Tao
26fe587b44 skip loop check if it is currently inside a loop (#6022) 2025-08-11 23:45:31 +00:00
Wanlin Du
d9fb08c9da feat: migrate tools to use parametersJsonSchema. (#5330) 2025-08-11 23:12:41 +00:00
Wanlin Du
f52d073dfb chore: migrate from responseSchema to use responseJsonSchema. (#4814) 2025-08-11 23:04:58 +00:00
Shreya Keshive
c7fd4c4a96 Start IDE connection after config initialization (#6018) 2025-08-11 22:09:47 +00:00
christine betts
94b6199943 [ide-mode] Update sandbox detection logic to support macos seatbelt (#6005)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-11 21:09:57 +00:00
christine betts
c0f5f6a5f6 [ide-mode] Update handling of workspace paths (#6014)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-11 21:06:01 +00:00
christine betts
0e98641b51 Add support for VSCode-like editors (#5699)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-11 21:01:37 +00:00
Shreya Keshive
4656f17524 Reduce noisy IDE integration error message in standalone terminal (#6006) 2025-08-11 19:57:56 +00:00
cornmander
110e00178b Add --experimental-cli to speed up prettier formatting. (#5999)
Co-authored-by: Seth Troisi <sethtroisi@google.com>
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-11 19:09:12 +00:00
Sijie Wang
72832fb889 Fix line end bugs in Vim mode (#5328)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-11 18:58:32 +00:00
Gaurav
6390b81646 update: issue triage workflows tags duplicate issues (#5868) 2025-08-11 18:48:57 +00:00
Tommaso Sciortino
239ba63d28 Make ProjectIdRequiredError error more lenient (#5693) 2025-08-11 18:04:44 +00:00
Jacob Richman
2269f8a1a8 Modify content generated describing the ide context to only include deltas after the initial update (#5880) 2025-08-11 17:15:44 +00:00
cornmander
aa5c80dec4 feat(core): add host validation to GoogleCredentialProvider (#5962)
Co-authored-by: Brian Ray <62354532+emeryray2002@users.noreply.github.com>
2025-08-11 16:40:30 +00:00
Shreya Keshive
b0b12af2ce Additional IDE integration polishes (#5985) 2025-08-11 16:27:45 +00:00
christine betts
8dd6f04199 Show IDE diff options in both panes (#5986) 2025-08-11 16:13:45 +00:00
Lee James
2548facc79 feat: add "surface" to all logs (#5862) 2025-08-11 15:11:20 +00:00
tanzhenxin
c96852dc56 feat: add usage statistics logging for Qwen integration 2025-08-11 22:13:56 +08:00
tanzhenxin
028a82ebeb chore: format&lint 2025-08-11 12:01:43 +08:00
tanzhenxin
6b67cd1b57 make /init respect configured context filename and align docs with QWEN.md 2025-08-11 11:56:12 +08:00
tanzhenxin
96a9b683b2 Merge pull request #266 from mahone3297/fix-readme-status-command
Fix README.md: Replace /status command with /stats command in documen…
2025-08-11 09:54:03 +08:00
tanzhenxin
dcc86699cf Merge pull request #235 from AstroAir/main
rename GEMINI.md to QWEN.md across the codebase
2025-08-11 09:51:01 +08:00
Gal Zahavi
2865a52778 docs(config): Add showLineNumbers option to documentation (#5947) 2025-08-10 19:06:35 +00:00
Dmitry Lyalin
0e44bbc85d docs(readme): Overhaul for clarity and user experience (#5732)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Srinath Padmanabhan <17151014+srithreepo@users.noreply.github.com>
2025-08-10 18:52:14 +00:00
mahone3297
964509f587 Fix README.md: Replace /status command with /stats command in documentation
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2025-08-10 21:54:26 +08:00
nguu0123
a37423bf7f Update name of the workflow 2025-08-10 13:40:24 +03:00
nguu0123
bfcb3e7f1d Remove redundant Node.js setup and build steps from Docker workflow 2025-08-10 13:23:49 +03:00
nguu0123
1a581ed191 Limit docker image workflow to tags and manual triggers 2025-08-10 13:17:34 +03:00
nguu0123
5c94913643 Refactor Dockerfile with multi-stage build for smaller image size 2025-08-10 13:00:46 +03:00
nguu0123
e221b077e5 Fix gha version 2025-08-10 12:26:51 +03:00
nguu0123
0f58b3fd32 fix qemu gha version 2025-08-10 12:24:18 +03:00
nguu0123
32d06b2fc1 Add publish image gha 2025-08-10 12:20:22 +03:00
Ali Al Jufairi
0157eae3d7 fix(settings): enable default usage statistics collection (#5909) 2025-08-10 02:56:53 +00:00
AstroAir
e3a5806ae2 fix: simplify mock return values for QWEN.md in App tests 2025-08-10 10:19:47 +08:00
Max Qian
a45adbdc76 Merge branch 'QwenLM:main' into main 2025-08-10 08:36:39 +08:00
Ali Al Jufairi
8a9a927544 feat(ui): add /settings command and UI panel (#4738)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-10 00:04:52 +00:00
Lee Won Jun
c632ec8b03 [#5356] Minor fix: Remove duplicate binding and add complete navigation command (#5884)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-09 22:28:28 +00:00
fuyou
0dea7233b6 feat(cli) - enhance input UX with double ESC clear (#4453)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-09 22:26:43 +00:00
Yuki Okita
34434cd4aa feat: drop load-memory-from-include-directories option from cli args (#5866) 2025-08-09 19:50:53 +00:00
tanzhenxin
41500814b0 Merge pull request #242 from nipeharefa/rename-make-npx
fix: rename make run-npx from gemini to qwen
2025-08-09 22:26:14 +08:00
Nipe Setiawan Harefa
786832913b fix: rename make run-npx from gemini to qwen 2025-08-09 16:58:09 +07:00
JAYADITYA
6b19c8bd55 feat: add humorous tip for new line shortcut in Gemini CLI (#5666)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-09 07:24:21 +00:00
Lee Won Jun
b8084ba815 Centralize Key Binding Logic and Refactor (Reopen) (#5356)
Co-authored-by: Lee-WonJun <10369528+Lee-WonJun@users.noreply.github.com>
2025-08-09 07:03:17 +00:00
Hiroaki Mitsuyoshi
6487cc1689 feat(chat): Add overwrite confirmation dialog to /chat save (#5686)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-09 06:59:22 +00:00
Brian de Alwis
191cc01bf5 fix(core): restrict oauth_creds.json file permissions (#5245)
Co-authored-by: cornmander <shikhman@google.com>
2025-08-09 03:05:30 +00:00
AstroAir
4807434d9f refactor: rename GEMINI.md to QWEN.md across the codebase 2025-08-09 10:33:02 +08:00
N. Taylor Mullen
c184ec3224 chore(release): v0.1.18 (#5864) 2025-08-08 17:26:43 -07:00
Jacob MacDonald
f35921a771 Add MCP Roots support (#5856)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-08 23:29:06 +00:00
Gal Zahavi
c03ae43777 feat: Add option to hide line numbers in code blocks (#5857)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-08 22:11:14 +00:00
Jacob MacDonald
69322e12e4 Add a request queue to the tool scheduler (#5845) 2025-08-08 21:50:35 +00:00
Shreya Keshive
9ac62565a0 Fix excessive console logging + remove unnecessary try catch (#5860) 2025-08-08 21:48:02 +00:00
Shreya Keshive
344ee29f77 Use slash command instead of context drawer to display open files in editor to reduce flickering in the UI (#5858) 2025-08-08 21:26:11 +00:00
shishu314
60bde58f29 fix(cli) - Adding logging for response and error in LoggingContentGenerator (#5842)
Co-authored-by: Shi Shu <shii@google.com>
2025-08-08 19:58:33 +00:00
shrutip90
34b5dc7f28 Add FolderTrustDialog that shows on launch and enables folderTrust setting (#5815) 2025-08-08 18:02:27 +00:00
christine betts
3af4913ef3 [ide-mode] Close all open diffs when the CLI gets closed (#5792) 2025-08-08 15:38:30 +00:00
christine betts
5ec4ea9b4d [ide-mode] Wire up env variables to sandbox (#5804) 2025-08-08 15:35:47 +00:00
christine betts
407393b128 [ide-mode] Hide diff options when active diff is not focused (#5808)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-08 15:21:50 +00:00
Gal Zahavi
51d09e720b fix(core): Add missing mnemonist dependency (#5841) 2025-08-08 15:10:04 +00:00
pomelo
c09abb817f Merge pull request #227 from QwenLM/fix/remove-google-registry
chore: remove google registry
2025-08-08 20:51:42 +08:00
tanzhenxin
b7663950f2 chore: remove google registry 2025-08-08 20:45:54 +08:00
Akhil Appana
f5e0f16157 fix: properly report tool errors in telemetry (#5688)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-08 11:33:42 +00:00
tanzhenxin
8158e82165 Merge pull request #225 from QwenLM/feat/qwen-oauth
feat(oauth): add Qwen OAuth integration
2025-08-08 17:51:31 +08:00
mingholy.lmh
f8d3571e31 fix: switch baseUrl to prod 2025-08-08 16:31:17 +08:00
tanzhenxin
6f399c078a chore: format 2025-08-08 15:02:17 +08:00
tanzhenxin
854c452580 Merge branch 'feat/qwen-oauth' of https://github.com/QwenLM/qwen-code into feat/qwen-oauth 2025-08-08 14:57:26 +08:00
tanzhenxin
f503be14e9 chore: add metadata on openai content generator 2025-08-08 14:57:13 +08:00
mingholy.lmh
5d2a678cb2 docs: update README for qwen-oauth 2025-08-08 14:08:08 +08:00
agarwalravikant
5ab184fcaf Fix for git issue 5657 to add lines of code added/removed telemetry (#5823)
Co-authored-by: Ravikant Agarwal <ravikantag@google.com>
2025-08-08 04:38:07 +00:00
mingholy.lmh
ce632725b0 refactor: re-organize Qwen related code files.
Co-authored-by: tanzhenxin <tanzhenxing1987@gmail.com>
Co-authored-by: pomelo-nwu <czynwu@outlook.com>
2025-08-08 11:55:58 +08:00
mingholy.lmh
ea7dcf8347 feat(oauth): add Qwen OAuth integration 2025-08-08 10:30:18 +08:00
Gal Zahavi
86eaa03f8a feat(telemetry): Prevent memory leak in ClearcutLogger (#5734) 2025-08-08 01:53:39 +00:00
Jerop Kipruto
e50d886ba8 docs: Improve local telemetry example (#5818) 2025-08-08 01:17:19 +00:00
Sandy Tao
e8815ba43c feat(quality): Reset when seeing a new type of Markdown element (#5820) 2025-08-08 00:21:42 +00:00
shishu314
bae922a632 fix(cli) - Move logging into CodeAssistServer (#5781)
Co-authored-by: Shi Shu <shii@google.com>
2025-08-07 23:58:18 +00:00
laurentsimon
60362e0329 fix: MCP servers allowed in settings do not show up in /mcp command (#5324) 2025-08-07 23:42:17 +00:00
Jerop Kipruto
494a10e7a7 Add echo tool to automated triage workflow (#5809)
Co-authored-by: Gaurav <39389231+gsquared94@users.noreply.github.com>
2025-08-07 23:14:28 +00:00
Miguel Solorio
785ee5d59a Use semantic colors in themes (#5796)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-07 23:11:35 +00:00
Gal Zahavi
4f2974dbfe feat(ui): Improve UI layout adaptation for narrow terminals (#5651)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-07 22:55:53 +00:00
Richie Foreman
65e4b941ee chore(vscode): Add recommended extensions list to vscode settings. (#5810) 2025-08-07 22:54:00 +00:00
Richie Foreman
9bc0a4aff3 chore(telemetry): Log FIREBASE_STUDIO when using Gemini CLI within Firebase Studio (#5790) 2025-08-07 22:50:48 +00:00
Allen Hutchison
0c32a4061d fix(core): Replace flaky performance tests with robust correctness tests (#5795)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-07 22:38:21 +00:00
Bryant Chandler
9fc7115b86 perf(filesearch): Use async fzf for non-blocking file search (#5771)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-07 22:24:55 +00:00
Richie Foreman
c38147a3a6 chore(vscode settings): Update VsCode settings for quality-of-life (#5806)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-07 22:16:57 +00:00
Gaurav
908ce2be33 update: google-github-actions/run-gemini-cli version in workflows (#5802) 2025-08-07 21:57:54 +00:00
Shreya Keshive
f1663d9615 README + reduce required VS Code version for companion extension (#5719) 2025-08-07 21:25:06 +00:00
Shreya Keshive
4d4eacfc40 Few IDE integration polishes (#5727) 2025-08-07 21:19:31 +00:00
Jacob MacDonald
19491b7b94 avoid loading and initializing CLI config twice in non-interactive mode (#5793) 2025-08-07 21:19:06 +00:00
shrutip90
53f8617b24 Add new folderTrust setting that the users can enable or disable (#5798) 2025-08-07 21:06:17 +00:00
Adam Weidman
3a3b138195 Include Schema Error Handling for Vertex and Google Auth methods (#5780)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-07 20:21:39 +00:00
Pyush Sinha
8e6a565adb fix: re render context usage indicator (#5102) 2025-08-07 18:16:47 +00:00
anthony bushong
a3351bc985 fix(tests): add missing deps in cli to fix sandbox runs (#5742) 2025-08-07 17:58:44 +00:00
Shehab
7596481a9d feat(cli): Allow Exiting Authentication Menu with CTRL+C (SIGINT) (#4482)
Co-authored-by: Seth Troisi <sethtroisi@google.com>
2025-08-07 17:26:55 +00:00
joshualitt
8bac9e7d04 Migrate EditTool, GrepTool, and GlobTool to DeclarativeTool (#5744) 2025-08-07 17:05:37 +00:00
Sandy Tao
0d65baf928 Fix(core): Use Flash for next speaker check (#5786) 2025-08-07 16:18:53 +00:00
Lee James
8d848dca4a feat: open repo secrets page in addition to README (#5684) 2025-08-07 16:00:46 +00:00
Jacob MacDonald
6ae75c9f32 Add a context percentage threshold setting for auto compression (#5721) 2025-08-07 14:34:40 +00:00
Fan
ffc2d27ca3 feat: add qwencoder as co-author (#207)
* init

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

* fix shell tool regex pattern for git commit messages

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>

---------

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
2025-08-07 17:07:56 +08:00
Abhi
36750ca49b feat(agent): Introduce Foundational Subagent Architecture (#1805)
Co-authored-by: Colt McAnlis <colton@google.com>
2025-08-07 00:34:38 +00:00
Allen Hutchison
d6a7334279 fix(logging): Ensure sandbox startup messages are routed to stderr (#5725) 2025-08-07 00:19:10 +00:00
anthony bushong
99f88851fb fix(actions): swap gha bot for cla allowlisted gemini-cli-robot (#5730) 2025-08-07 00:01:22 +00:00
N. Taylor Mullen
01f7c4b740 Fix(tests): update mcp_server_cyclic_schema test (#5733) 2025-08-06 23:59:50 +00:00
DevMassive
9ac3e8b79e feat: Improve @-command file path completion with fzf integration (#5650)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-06 23:41:04 +00:00
Richie Foreman
4782113ceb fix(core): Improve errors in situations where the command spawn does … (#5723) 2025-08-06 23:31:42 +00:00
shrutip90
626844b539 experiment: Add feature exp flag for folder trust (#5709) 2025-08-06 22:27:21 +00:00
Seth Vargo
5cd63a6abc feat(cli): get the run-gemini-cli version from the GitHub API (#5708) 2025-08-06 20:56:06 +00:00
christine betts
b55467c1dd [ide-mode] Support rendering in-IDE diffs using the edit tool (#5618)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-06 20:55:29 +00:00
joshualitt
43510ed212 bug(core): Prompt engineering for truncated read_file. (#5161) 2025-08-06 20:52:04 +00:00
Jack Wotherspoon
ad5d2af4e3 tests: fix e2e tests (#5706) 2025-08-06 20:46:50 +00:00
Jacob MacDonald
e3e7677753 Add integration test for maximum schema depth error handling (#5685) 2025-08-06 20:45:54 +00:00
Jacob MacDonald
b3cfaeb6d3 Add detection of tools with bad schemas and automatically omit them with a warning (#5694) 2025-08-06 20:19:15 +00:00
Shreya Keshive
024b8207eb Add hint to enable IDE integration for users running in VS Code (#5610) 2025-08-06 19:47:58 +00:00
Lee James
1fb680bacc bug(tests): fix test errors (#5678)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-06 19:26:46 +00:00
shishu314
1f0ad86544 fix: Restore user input when the user cancels response (#5601)
Co-authored-by: Shi Shu <shii@google.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-06 19:19:10 +00:00
joshualitt
6133bea388 feat(core): Introduce DeclarativeTool and ToolInvocation. (#5613) 2025-08-06 17:50:02 +00:00
agarwalravikant
882a97aff9 Fix to send user tool confirmation decision for yolo or non interacti… (#5677)
Co-authored-by: Ravikant Agarwal <ravikantag@google.com>
2025-08-06 17:46:42 +00:00
christine betts
fde9849d48 [ide-mode] Add support for in-IDE diff handling in the CLI (#5603)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-06 17:36:05 +00:00
Akhil Appana
487818df27 fix: improve error handling and path processing in memory discovery (#5175)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Allen Hutchison <adh@google.com>
2025-08-06 17:19:43 +00:00
Jack Wotherspoon
ca4c745e3b feat(mcp): add gemini mcp commands for add, remove and list (#5481) 2025-08-06 15:52:29 +00:00
Lee James
b38f377c9a feat: Enable /setup-github to always run, and error appropriately (#5653)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-06 13:06:37 +00:00
Yiheng Xu
f0c60b90ea Merge pull request #206 from feature/yiheng/sync-gemini-cli-0.1.17
sync gemini cli 0.1.17
2025-08-06 17:09:54 +08:00
Yiheng Xu
14a3be7976 fix generateJson with respond in schema
Co-Authored-By: Qwen-Coder <qwen-coder@alibabacloud.com>
2025-08-06 17:03:57 +08:00
Bryant Chandler
aab850668c feat(file-search): Add support for non-recursive file search (#5648)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-06 06:33:27 +00:00
Yash Velagapudi
8b1d5a2e3c fix(core): Treat .mts files as TypeScript modules instead of video files (#5492)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-06 06:15:53 +00:00
Gaurav
a0990380b5 fix:missing coreTool in new workflow setup (#5656) 2025-08-06 05:26:27 +00:00
Jerop Kipruto
2fcaa302da docs: add GitHub Integration section to README (#5649) 2025-08-06 04:01:42 +00:00
Lee James
7fa2d7be17 doc(lint): fix docs on how to run linter in "fix" mode (#5647) 2025-08-06 03:21:36 +00:00
Lee James
be3aabaea6 docs(setup-github): Inform user of the next steps after running slash command (#5644)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-06 02:59:25 +00:00
Gaurav
b87b436ebc refactor: use google-github-actions/run-gemini-cli action (#5643) 2025-08-06 02:24:40 +00:00
Jacob MacDonald
7e5a5e2da7 Detect and warn about cyclic tool refs when schema depth errors are encountered (#5609)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-06 01:48:00 +00:00
christine betts
9db5aab498 Update a couple more witty phrases (#5641) 2025-08-06 01:13:22 +00:00
Sandy Tao
390edb5e0a Add tests for useAtCompletion reset logic (#5639) 2025-08-06 01:10:29 +00:00
github-actions[bot]
ea96293e16 chore(release): v0.1.18 2025-08-06 00:58:42 +00:00
Jerop Kipruto
cd7e60e008 switch from heads to tags in url path (#5638) 2025-08-05 17:47:28 -07:00
Sandy Tao
59bde4a612 fix(core) Fix not resetting when after first get out of completion suggestions (#5635)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-06 00:37:44 +00:00
Bryan Morgan
02f7e48c51 Removed GitHub Actions experiment files (#5627) 2025-08-06 00:01:18 +00:00
christine betts
aeb6602266 Remove a few witty loading phrases (#5631)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-05 23:59:14 +00:00
David Rees
805114aef8 fix(docs): Fix code block delimiters in commands.md (#5521)
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 23:30:57 +00:00
Justin Mahood
91035ad7b0 Fix(vim): Fix shell mode in Vim mode (#5567)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-05 23:29:37 +00:00
Bryant Chandler
12a9bc3ed9 feat(core, cli): Introduce high-performance FileSearch engine (#5136)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-05 23:18:03 +00:00
Allen Hutchison
2141b39c3d feat(cli): route non-interactive output to stderr (#5624) 2025-08-05 23:11:21 +00:00
Shreya Keshive
268627469b Refactor IDE client state management, improve user-facing error messages, and add logging of connection events (#5591)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-05 22:52:58 +00:00
Jacob MacDonald
6a72cd064b check for the prompt capability before listing prompts from MCP servers (#5616)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 22:50:30 +00:00
sangwook
aebe3ace3c perf(core): implement parallel file processing for 74% performance improvement (#4763)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 22:47:18 +00:00
8bitmp3
c402784d97 Fix and improve Gemini CLI troubleshooting.md doc (#2734)
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 22:43:41 +00:00
William Thurston
bed6ab1cce fix(start): use absolute path to resolve CLI package (#3196)
Co-authored-by: Abhi <43648792+abhipatel12@users.noreply.github.com>
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 22:43:15 +00:00
xyizko
1b08a6c063 fix(minor): Grammar and Typos (#5053)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-05 22:41:27 +00:00
Sandy Tao
82fa7a0660 fix(format) Fix format for .github/workflows/weekly-velocity-report.yml (#5622) 2025-08-05 22:32:06 +00:00
Bryan Morgan
2e9236fab4 Update weekly-velocity-report.yml 2025-08-05 18:11:06 -04:00
Mikhail Aksenov
dadf05809c feat: mcp - support audiences for OAuth2 (#5265) 2025-08-05 22:02:16 +00:00
Ramón Medrano Llamas
29c3825604 fix(mcp): clear prompt registry on refresh to prevent duplicates (#5385)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 21:59:31 +00:00
Hiroaki Mitsuyoshi
faf6a5497a feat(docs): Add /chat delete command in commands.md (#5408)
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 21:58:09 +00:00
Jacob Richman
dd85aaa951 bug(core): Fix flaky test by using waitFor. (#5540)
Co-authored-by: Sandy Tao <sandytao520@icloud.com>
2025-08-05 21:56:38 +00:00
Gal Zahavi
aacae1de43 fix(core): prevent UI shift after vim edit (#5315) 2025-08-05 21:55:54 +00:00
Sandy Tao
8d993156e7 Fix format (#5617) 2025-08-05 21:38:43 +00:00
Bryan Morgan
57003ca68c Update weekly-velocity-report.yml 2025-08-05 17:18:19 -04:00
Bryan Morgan
47de37eb0a Update weekly-velocity-report.yml 2025-08-05 17:10:37 -04:00
Bryan Morgan
dc7b4fda64 Update weekly-velocity-report.yml 2025-08-05 17:08:22 -04:00
Bryan Morgan
3dcca31796 Update weekly-velocity-report.yml 2025-08-05 17:00:44 -04:00
Bryan Morgan
c194a6ac3b GitHub Action for velocity reporting purposes (#5607) 2025-08-05 20:33:59 +00:00
Bryan Morgan
d421fa9e64 Testing basic velocity report action 2025-08-05 15:55:50 -04:00
Luccas Paroni
2778c7d851 feat(core): Parse Multimodal MCP Tool responses (#5529)
Co-authored-by: Luccas Paroni <luccasparoni@google.com>
2025-08-05 19:19:47 +00:00
Oleksandr Gotgelf
b465145229 chore(settings): clean up comments in settings.ts (#5576) 2025-08-05 19:10:16 +00:00
Alexander J
f2d6748432 fix: small typo in ROADMAP.md (#5593)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-05 19:04:10 +00:00
joshualitt
08f1431946 bug(core): fix contentRangeTruncated calculation. (#5329)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-05 18:52:39 +00:00
David East
43d5aaa798 fix(mcp): ensure authorization url is valid when containing query params (#5545)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-05 18:44:30 +00:00
Yuki Okita
5c8268b6f4 feat: Multi-Directory Workspace Support (part 3: configuration in settings.json) (#5354)
Co-authored-by: Allen Hutchison <adh@google.com>
2025-08-05 17:01:01 +00:00
Jack Wotherspoon
d0cda58f1f docs: update typo in commands.md (#5584) 2025-08-05 14:03:58 +00:00
Yiheng Xu
9ffeacc0f9 fix tool
Co-Authored-By: Qwen-Coder <qwen-coder@alibabacloud.com>
2025-08-05 17:09:25 +08:00
Yiheng Xu
cd375fefe5 sync gemini-cli 0.1.17
Co-Authored-By: Qwen-Coder <qwen-coder@alibabacloud.com>
2025-08-05 17:09:19 +08:00
N. Taylor Mullen
c7a1de4983 chore(release): v0.1.17 (#5561) 2025-08-04 21:37:32 -07:00
DeWitt Clinton
49001a0f83 Remove the "local modifications" string from bug and about reports. (#5552) 2025-08-05 04:01:19 +00:00
Olcan
11ecf6fc86 fix self-reference in build script (#5548) 2025-08-05 01:12:21 +00:00
github-actions[bot]
42a0336876 chore(release): v0.1.17 2025-08-05 00:30:08 +00:00
Harold Mciver
99ba2f6424 Update MCP client to connect to servers with only prompts (#5290) 2025-08-04 21:38:23 +00:00
Harold Mciver
a7ea4ce0c8 Update MCP client to connect to servers with only prompts (#5290) 2025-08-04 21:38:23 +00:00
christine betts
93f8fe3671 [ide-mode] Add openDiff tool to IDE MCP server (#4519)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-04 21:36:23 +00:00
christine betts
d54780edda [ide-mode] Add openDiff tool to IDE MCP server (#4519)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-04 21:36:23 +00:00
Mo Moadeli
e7b468e122 feat(cli): Prevent redundant opening of browser tabs when zero MCP servers are configured (#5367)
Co-authored-by: Allen Hutchison <adh@google.com>
2025-08-04 21:20:49 +00:00
Mo Moadeli
3562ab8f5c feat(cli): Prevent redundant opening of browser tabs when zero MCP servers are configured (#5367)
Co-authored-by: Allen Hutchison <adh@google.com>
2025-08-04 21:20:49 +00:00
Shreya Keshive
dca040908a ide-mode flag cleanup (#5531) 2025-08-04 21:06:50 +00:00
Shreya Keshive
7b03d057ea ide-mode flag cleanup (#5531) 2025-08-04 21:06:50 +00:00
Shreya Keshive
2180dd13dc Improve user-facing error messages for IDE mode (#5522) 2025-08-04 21:06:17 +00:00
Shreya Keshive
0895e29c1b Improve user-facing error messages for IDE mode (#5522) 2025-08-04 21:06:17 +00:00
Richie Foreman
11808ef7ed fix(core): Allow model to be set from settings.json (#5527) 2025-08-04 20:41:58 +00:00
Richie Foreman
fb6d9cbd36 fix(core): Allow model to be set from settings.json (#5527) 2025-08-04 20:41:58 +00:00
Sandy Tao
8da6d23688 refactor(core): Rename useSlashCompletion to useCommandCompletion (#5532) 2025-08-04 20:35:26 +00:00
Sandy Tao
48fa6f84c8 refactor(core): Rename useSlashCompletion to useCommandCompletion (#5532) 2025-08-04 20:35:26 +00:00
Seth Vargo
37b83e05a7 Use new URLs for downloading workflows (#5524) 2025-08-04 20:10:36 +00:00
Seth Vargo
016a263409 Use new URLs for downloading workflows (#5524) 2025-08-04 20:10:36 +00:00
Jacob MacDonald
5caf23d627 remove unnecessary checks in WriteFileChecks.getDescription (#5526) 2025-08-04 19:12:33 +00:00
Jacob MacDonald
12fc17bc8c remove unnecessary checks in WriteFileChecks.getDescription (#5526) 2025-08-04 19:12:33 +00:00
Sandy Tao
d1bfba1abb feat(core): Add trailing space when completing an at completion suggestion (#5475)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-04 18:30:59 +00:00
Sandy Tao
8ba12269d5 feat(core): Add trailing space when completing an at completion suggestion (#5475)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-04 18:30:59 +00:00
Sandy Tao
b9fe4fc263 feat(cli): Handle Punctuation in @ Command Parsing (#5482) 2025-08-04 17:49:15 +00:00
Sandy Tao
02e44e5db2 feat(cli): Handle Punctuation in @ Command Parsing (#5482) 2025-08-04 17:49:15 +00:00
Pyush Sinha
e506b40c27 fix: /help remove flickering and respect clear shortcut (ctrl+l) (#3611)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: Allen Hutchison <adh@google.com>
2025-08-04 16:53:50 +00:00
Pyush Sinha
ca19aa9125 fix: /help remove flickering and respect clear shortcut (ctrl+l) (#3611)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: Allen Hutchison <adh@google.com>
2025-08-04 16:53:50 +00:00
owenofbrien
83a04c4755 Cloud Shell surface logging fix (#5364) 2025-08-04 16:48:46 +00:00
owenofbrien
16d29e2d6f Cloud Shell surface logging fix (#5364) 2025-08-04 16:48:46 +00:00
matt korwel
94b7b402c5 feat(docs): create new documentation for automation and triage (#5363) 2025-08-04 08:49:14 -07:00
matt korwel
cdbe26b811 feat(docs): create new documentation for automation and triage (#5363) 2025-08-04 08:49:14 -07:00
koalazf.99
f1146c4b2e fix: ci 2025-08-04 22:06:35 +08:00
koalazf.99
0af8b65407 test pr 2025-08-04 18:21:25 +08:00
koalazf.99
db1e358081 add: @qwen pr review 2025-08-04 17:58:01 +08:00
koalazf.99
a28bf81185 update: github workflow actions: pr triage 2025-08-04 17:35:06 +08:00
koalazf.99
d1964200f9 update: github workflow actions 2025-08-04 17:30:46 +08:00
koalazf.99
42ab185890 replace github token 2025-08-04 17:09:57 +08:00
koalazf.99
b2bff47fc7 update action version 2025-08-04 17:02:03 +08:00
koalazf.99
f1328b8437 fix: package dependency && issue triage 2025-08-04 16:20:24 +08:00
koalazf.99
54e41e3b31 skip create app token 2025-08-04 16:12:22 +08:00
koalazf.99
c306cd89fc skip create app token 2025-08-04 16:10:30 +08:00
koalazf.99
0414768cf8 try: github actions 2025-08-04 15:32:20 +08:00
Kumbham Ajay Goud
a8984a9b30 Fix: Preserve conversation history when changing auth methods via /auth (#5216)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-03 22:03:01 +00:00
Kumbham Ajay Goud
bdfff529aa Fix: Preserve conversation history when changing auth methods via /auth (#5216)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-03 22:03:01 +00:00
Ali Al Jufairi
acd48a1259 docs(fix): Update themes documentation to include new color keys for… (#5467) 2025-08-03 21:56:27 +00:00
Ali Al Jufairi
f83c6168ad docs(fix): Update themes documentation to include new color keys for… (#5467) 2025-08-03 21:56:27 +00:00
N. Taylor Mullen
70478b92a9 chore(release): v0.1.16 (#5478) 2025-08-03 13:38:03 -07:00
N. Taylor Mullen
c7d1a28ac6 chore(release): v0.1.16 (#5478) 2025-08-03 13:38:03 -07:00
Shreya Keshive
2cdaf912ba Generate NOTICES.TXT and surface via command (#5310) 2025-08-03 20:19:34 +00:00
Shreya Keshive
4f69b2d8dc Generate NOTICES.TXT and surface via command (#5310) 2025-08-03 20:19:34 +00:00
Ayesha Shafique
072d8ba289 feat: Add reverse search capability for shell commands (#4793) 2025-08-03 19:53:24 +00:00
Ayesha Shafique
0335ce5ecc feat: Add reverse search capability for shell commands (#4793) 2025-08-03 19:53:24 +00:00
Oleksandr Gotgelf
03ed37d0dc fix: exclude DEBUG and DEBUG_MODE from project .env files by default (#5289)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-03 18:44:15 +00:00
Oleksandr Gotgelf
c0b4fc9506 fix: exclude DEBUG and DEBUG_MODE from project .env files by default (#5289)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-03 18:44:15 +00:00
Billy Biggs
bedcbb9feb Add a setting to disable the version update nag message (#5449) 2025-08-03 18:20:55 +00:00
Billy Biggs
ae8a8f6062 Add a setting to disable the version update nag message (#5449) 2025-08-03 18:20:55 +00:00
Gal Zahavi
820169ba2e feat(autoupdate): Improve update check and refactor for testability (#5389) 2025-08-02 03:17:32 +00:00
Gal Zahavi
8d5fa18893 feat(autoupdate): Improve update check and refactor for testability (#5389) 2025-08-02 03:17:32 +00:00
TIRUMALASETTI PRANITH
15a1f1af9d fix(config): Resolve duplicate config loading from home directory (#5090)
Co-authored-by: Allen Hutchison <adh@google.com>
Co-authored-by: Allen Hutchison <allen@hutchison.org>
2025-08-01 22:22:17 +00:00
TIRUMALASETTI PRANITH
f50ec186b5 fix(config): Resolve duplicate config loading from home directory (#5090)
Co-authored-by: Allen Hutchison <adh@google.com>
Co-authored-by: Allen Hutchison <allen@hutchison.org>
2025-08-01 22:22:17 +00:00
Allen Hutchison
387706607d fix(tests): refactor integration tests to be less flaky (#4890)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-01 21:33:33 +00:00
Allen Hutchison
321e1e25c7 fix(tests): refactor integration tests to be less flaky (#4890)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-01 21:33:33 +00:00
mrcabbage972
dccca91fc9 Switch utility calls to use the gemini-2.5-flash-lite model (#5193)
Co-authored-by: Anjali Sridhar <anjsridhar@gmail.com>
2025-08-01 21:11:51 +00:00
mrcabbage972
82972e4b03 Switch utility calls to use the gemini-2.5-flash-lite model (#5193)
Co-authored-by: Anjali Sridhar <anjsridhar@gmail.com>
2025-08-01 21:11:51 +00:00
owenofbrien
a6a386f72a Propagate prompt (#5033) 2025-08-01 19:37:56 +00:00
owenofbrien
8484730cd6 Propagate prompt (#5033) 2025-08-01 19:37:56 +00:00
joshualitt
67d16992cf bug(cli): Prefer IPv4 dns resolution by default. (#5338) 2025-08-01 19:30:39 +00:00
joshualitt
e5ce7d4872 bug(cli): Prefer IPv4 dns resolution by default. (#5338) 2025-08-01 19:30:39 +00:00
Santhosh Kumar
9382334a5e feat(github): add workflow to manage stale issues and PRs (#4871)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-01 19:26:03 +00:00
Santhosh Kumar
786750b1b5 feat(github): add workflow to manage stale issues and PRs (#4871)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-01 19:26:03 +00:00
Sandy Tao
c795168e9c feat(core): Use completionStart/End for slash command auto-completion (#5374) 2025-08-01 18:51:38 +00:00
Sandy Tao
e7699ddfb1 feat(core): Use completionStart/End for slash command auto-completion (#5374) 2025-08-01 18:51:38 +00:00
Billy Biggs
24c5a15d7a Add a setting to disable auth mode validation on startup (#5358) 2025-08-01 18:49:03 +00:00
Billy Biggs
cab60a38a1 Add a setting to disable auth mode validation on startup (#5358) 2025-08-01 18:49:03 +00:00
andrea-berling
c725e258c6 feat(sandbox): Add SANDBOX_FLAGS for custom container options (#2036)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-01 16:32:44 +00:00
andrea-berling
a2db3d1b38 feat(sandbox): Add SANDBOX_FLAGS for custom container options (#2036)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-08-01 16:32:44 +00:00
Brian de Alwis
d42e3f1e7f doc: use standard Google security policy for GitHub projects (#5062) 2025-08-01 16:12:32 +00:00
Brian de Alwis
eafcfcd169 doc: use standard Google security policy for GitHub projects (#5062) 2025-08-01 16:12:32 +00:00
Silvio Junior
7748e56153 [Fix Telemetry for tool calls, PR 1/n] Propagate tool reported errors via ToolCallResponseInfo and ToolResult (#5222) 2025-08-01 15:20:08 +00:00
Silvio Junior
0d23195624 [Fix Telemetry for tool calls, PR 1/n] Propagate tool reported errors via ToolCallResponseInfo and ToolResult (#5222) 2025-08-01 15:20:08 +00:00
cornmander
e126d2fcd9 Add missing emacs entry in UI. (#5351) 2025-08-01 14:40:05 +00:00
cornmander
138e52b61e Add missing emacs entry in UI. (#5351) 2025-08-01 14:40:05 +00:00
neo
a5a3da01f6 doc: Add links to translated README versions
Added language selection links to the README for easier access to translated versions: German, Spanish, French, Japanese, Korean, Portuguese, Russian, and Chinese.
2025-08-01 15:18:26 +08:00
Brian Ray
dc9f17bb4a New browser launcher for MCP OAuth. (#5261) 2025-08-01 05:47:22 +00:00
Brian Ray
78435ab0bf New browser launcher for MCP OAuth. (#5261) 2025-08-01 05:47:22 +00:00
Sandy Tao
f21ff09389 fix(core): Remove json output schema from the next speaker check prompt (#5325)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-01 01:17:52 +00:00
Sandy Tao
ef445212f6 fix(core): Remove json output schema from the next speaker check prompt (#5325)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-01 01:17:52 +00:00
Raushan Raj
6c3fb18ef6 Update gemini-automated-issue-triage.yml (#5312)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-01 01:14:26 +00:00
Raushan Raj
c1157352b7 Update gemini-automated-issue-triage.yml (#5312)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-08-01 01:14:26 +00:00
Tommaso Sciortino
a3a432e3cf Fix bug executing commands in windows whose flags contain spaces (#5317) 2025-08-01 00:27:07 +00:00
Tommaso Sciortino
9397336a15 Fix bug executing commands in windows whose flags contain spaces (#5317) 2025-08-01 00:27:07 +00:00
Miguel Solorio
6f7beb414c Highlight slash commands in history (#5323) 2025-07-31 23:24:23 +00:00
Miguel Solorio
8e6c715b0f Highlight slash commands in history (#5323) 2025-07-31 23:24:23 +00:00
Jacob Richman
61e382444a fix(ux) bug in replaceRange dealing with newLines that was breaking vim support (#5320) 2025-07-31 23:16:29 +00:00
Jacob Richman
750e647988 fix(ux) bug in replaceRange dealing with newLines that was breaking vim support (#5320) 2025-07-31 23:16:29 +00:00
Sandy Tao
32809a7be7 feat(cli): Improve @ autocompletion for mid-sentence edits (#5321) 2025-07-31 23:07:12 +00:00
Sandy Tao
150a2568b4 feat(cli): Improve @ autocompletion for mid-sentence edits (#5321) 2025-07-31 23:07:12 +00:00
Paige Bailey
37a3f1e6b6 Add emacs support, as per user requests. :) (#1633)
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: N. Taylor Mullen <ntaylormullen@google.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: matt korwel <matt.korwel@gmail.com>
Co-authored-by: matt korwel <mattkorwel@google.com>
2025-07-31 22:46:04 +00:00
Paige Bailey
598b2cf7f4 Add emacs support, as per user requests. :) (#1633)
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: N. Taylor Mullen <ntaylormullen@google.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
Co-authored-by: matt korwel <matt.korwel@gmail.com>
Co-authored-by: matt korwel <mattkorwel@google.com>
2025-07-31 22:46:04 +00:00
JeromeJu
574015edd9 feat: Implement /setup-github command (#5069) 2025-07-31 22:14:22 +00:00
JeromeJu
8be10b4c09 feat: Implement /setup-github command (#5069) 2025-07-31 22:14:22 +00:00
Yuki Okita
f9a05401c1 feat: Multi-Directory Workspace Support (part2: add "directory" command) (#5241) 2025-07-31 19:02:08 +00:00
Yuki Okita
0c0881348d feat: Multi-Directory Workspace Support (part2: add "directory" command) (#5241) 2025-07-31 19:02:08 +00:00
Niladri Das
9a6422f331 fix: CLAUDE.md compatibility for GEMINI.md '@' file import behavior (#2978)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Allen Hutchison <adh@google.com>
2025-07-31 16:36:50 +00:00
Niladri Das
8550d70a57 fix: CLAUDE.md compatibility for GEMINI.md '@' file import behavior (#2978)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Allen Hutchison <adh@google.com>
2025-07-31 16:36:50 +00:00
joshualitt
ae86c7ba05 bug(core): UI reporting for truncated read_file. (#5155)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-31 16:31:14 +00:00
joshualitt
c80607ac15 bug(core): UI reporting for truncated read_file. (#5155)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-31 16:31:14 +00:00
anj-s
65be9cab47 Fix: Ensure that non interactive mode and interactive mode are calling the same entry points (#5137) 2025-07-31 12:36:12 +00:00
anj-s
ceccdf9d2c Fix: Ensure that non interactive mode and interactive mode are calling the same entry points (#5137) 2025-07-31 12:36:12 +00:00
Sandy Tao
23c014e29c Replace FlashDecidedToContinueEvent with NextSpeakerCheckEvent (#5257) 2025-07-31 04:47:04 +00:00
Sandy Tao
7ca978f3a0 Replace FlashDecidedToContinueEvent with NextSpeakerCheckEvent (#5257) 2025-07-31 04:47:04 +00:00
Kazunari001
3ef2c6d198 feat(docs): Add /init command in commands.md (#5187)
Co-authored-by: saucykazugmail <saucydog0922@gmail.com>
Co-authored-by: Gal Zahavi <38544478+galz10@users.noreply.github.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-31 01:52:40 +00:00
Kazunari001
54ec18141c feat(docs): Add /init command in commands.md (#5187)
Co-authored-by: saucykazugmail <saucydog0922@gmail.com>
Co-authored-by: Gal Zahavi <38544478+galz10@users.noreply.github.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-31 01:52:40 +00:00
Seth Troisi
c77a22d4c6 Add render counter in debug mode (#5242)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-31 00:43:11 +00:00
Seth Troisi
72af6e077f Add render counter in debug mode (#5242)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-31 00:43:11 +00:00
Gal Zahavi
d06e17fbd9 Improve error message for discoverTools function (#4157) 2025-07-31 00:16:21 +00:00
Gal Zahavi
152de2b6d8 Improve error message for discoverTools function (#4157) 2025-07-31 00:16:21 +00:00
Shreya Keshive
0c6f788406 Exclude companion extension from release versioning (#5226)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 22:49:26 +00:00
Shreya Keshive
8b645ff688 Exclude companion extension from release versioning (#5226)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 22:49:26 +00:00
christine betts
325bb89137 Add toggleable IDE mode setting (#5146) 2025-07-30 22:36:24 +00:00
christine betts
aad8893322 Add toggleable IDE mode setting (#5146) 2025-07-30 22:36:24 +00:00
Olcan
ac1bb5ee42 confirm save_memory tool, with ability to see diff and edit manually for advanced changes that may override past memories (#5237) 2025-07-30 22:21:31 +00:00
Olcan
e70d2bf6d5 confirm save_memory tool, with ability to see diff and edit manually for advanced changes that may override past memories (#5237) 2025-07-30 22:21:31 +00:00
Allen Hutchison
498edb57ab fix(testing): make ModelStatsDisplay snapshot test deterministic (#5236)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 22:09:32 +00:00
Allen Hutchison
5984eba070 fix(testing): make ModelStatsDisplay snapshot test deterministic (#5236)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 22:09:32 +00:00
christine betts
7bc8766542 Introduce IDE mode installer (#4877) 2025-07-30 21:26:31 +00:00
christine betts
3e1b2dc33a Introduce IDE mode installer (#4877) 2025-07-30 21:26:31 +00:00
Yuki Okita
c1fe688956 feat: Multi-Directory Workspace Support (part1: add --include-directories option) (#4605)
Co-authored-by: Allen Hutchison <adh@google.com>
2025-07-30 20:38:20 +00:00
Yuki Okita
cb6a2161fe feat: Multi-Directory Workspace Support (part1: add --include-directories option) (#4605)
Co-authored-by: Allen Hutchison <adh@google.com>
2025-07-30 20:38:20 +00:00
Srinath Padmanabhan
21965f986c Srithreepo Fixes for Scheduled triage (#5158) 2025-07-30 20:38:02 +00:00
Srinath Padmanabhan
8fabce2c04 Srithreepo Fixes for Scheduled triage (#5158) 2025-07-30 20:38:02 +00:00
shamso-goog
32b1ef3779 feat(ui): Update tool confirmation cancel button text (#4820)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 20:37:51 +00:00
shamso-goog
f7c2091389 feat(ui): Update tool confirmation cancel button text (#4820)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 20:37:51 +00:00
Hyunsu Shin
bcce1e7b84 perf(core): parallelize bfsFileSearch for 40% faster CLI startup (#5185) 2025-07-30 17:32:03 +00:00
Hyunsu Shin
35811d534a perf(core): parallelize bfsFileSearch for 40% faster CLI startup (#5185) 2025-07-30 17:32:03 +00:00
Olcan
bc23009f61 do not mention GEMINI.md in system prompt as it is not fixed and can confuse model as it is not mentioned by memory tool and memory file paths are generally not exposed to model (yet) (#5202) 2025-07-30 17:21:15 +00:00
Olcan
8378fbf7b2 do not mention GEMINI.md in system prompt as it is not fixed and can confuse model as it is not mentioned by memory tool and memory file paths are generally not exposed to model (yet) (#5202) 2025-07-30 17:21:15 +00:00
yaksh gandhi
b447c329db docs: Update chat command documentation with checkpoint locations (#5027)
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
Co-authored-by: F. Hinkelmann <franziska.hinkelmann@gmail.com>
2025-07-30 10:01:08 +00:00
yaksh gandhi
658a7b49df docs: Update chat command documentation with checkpoint locations (#5027)
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
Co-authored-by: F. Hinkelmann <franziska.hinkelmann@gmail.com>
2025-07-30 10:01:08 +00:00
N. Taylor Mullen
fd434626c5 chore(release): v0.1.15 (#5163) 2025-07-29 22:03:54 -07:00
N. Taylor Mullen
f0d80dfe23 chore(release): v0.1.15 (#5163) 2025-07-29 22:03:54 -07:00
Sandy Tao
8985e489a5 Skip and reset loop checking around code blocks (#5144) 2025-07-30 04:05:03 +00:00
Sandy Tao
85a0ed27f6 Skip and reset loop checking around code blocks (#5144) 2025-07-30 04:05:03 +00:00
Jenna Inouye
0ce89392b8 Docs: add documentation for .geminiignore (#5123)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-30 03:36:26 +00:00
Jenna Inouye
61107ef19d Docs: add documentation for .geminiignore (#5123)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-30 03:36:26 +00:00
Sambhav Khanna
d5a1b717c2 fix(update): correctly report new updates (#4821)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 00:11:15 +00:00
Sambhav Khanna
0b912e2e09 fix(update): correctly report new updates (#4821)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-30 00:11:15 +00:00
Allen Hutchison
091804c750 feat(docs): document GEMINI.md import syntax (#5145) 2025-07-29 23:41:31 +00:00
Allen Hutchison
c156fb0e8b feat(docs): document GEMINI.md import syntax (#5145) 2025-07-29 23:41:31 +00:00
Ava
d64c3d6af8 Add Starcraft ref to witty loading phrases (#5120)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-29 23:22:13 +00:00
Ava
1a92614c84 Add Starcraft ref to witty loading phrases (#5120)
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-29 23:22:13 +00:00
Tommaso Sciortino
327f915610 Fix typo in RFC 9728 impl (#5126) 2025-07-29 23:03:39 +00:00
Tommaso Sciortino
ed2b4c6aa4 Fix typo in RFC 9728 impl (#5126) 2025-07-29 23:03:39 +00:00
Srinath Padmanabhan
008051e42d Update Triage Logic to improve issue categorization. (#5110) 2025-07-29 21:44:48 +00:00
Srinath Padmanabhan
32c7070d7f Update Triage Logic to improve issue categorization. (#5110) 2025-07-29 21:44:48 +00:00
Shreya Keshive
293bb82019 Adds centralized support to log slash commands + sub commands (#5128) 2025-07-29 20:20:37 +00:00
Shreya Keshive
a2c3dbd189 Adds centralized support to log slash commands + sub commands (#5128) 2025-07-29 20:20:37 +00:00
shamso-goog
80079cd2a5 feat(cli): introduce /init command for GEMINI.md creation (#4852)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-07-29 16:49:01 +00:00
shamso-goog
72d6ef2d3c feat(cli): introduce /init command for GEMINI.md creation (#4852)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
2025-07-29 16:49:01 +00:00
Daniel Lee
7356764a48 feat(commands): add custom commands support for extensions (#4703) 2025-07-29 01:40:47 +00:00
Daniel Lee
bf7fd08f7e feat(commands): add custom commands support for extensions (#4703) 2025-07-29 01:40:47 +00:00
Gal Zahavi
871e0dfab8 feat: Add auto update functionality (#4686) 2025-07-29 00:56:52 +00:00
Gal Zahavi
c42d3b58e1 feat: Add auto update functionality (#4686) 2025-07-29 00:56:52 +00:00
Shreya Keshive
83c4dddb7e Only enable IDE integration if gemini-cli is running in the same path as open workspace (#5068) 2025-07-28 20:55:00 +00:00
Shreya Keshive
69c6808b14 Only enable IDE integration if gemini-cli is running in the same path as open workspace (#5068) 2025-07-28 20:55:00 +00:00
Seth Troisi
1c1aa047ff feat: Add tests for checkpoint tag sanitization (#4882) 2025-07-28 20:43:39 +00:00
Seth Troisi
3091980de2 feat: Add tests for checkpoint tag sanitization (#4882) 2025-07-28 20:43:39 +00:00
Abhi
b08679c906 Add new fallback state as prefactor for routing (#5065) 2025-07-28 19:55:50 +00:00
Abhi
cb39eef7b5 Add new fallback state as prefactor for routing (#5065) 2025-07-28 19:55:50 +00:00
Danny
b6c2c64f9b Adds docs outlining keyboard shortcuts for gemini-cli (#4727)
Co-authored-by: dannyzen <dannyrosen@google.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-28 19:35:06 +00:00
Danny
40db8cde97 Adds docs outlining keyboard shortcuts for gemini-cli (#4727)
Co-authored-by: dannyzen <dannyrosen@google.com>
Co-authored-by: Jacob Richman <jacob314@gmail.com>
2025-07-28 19:35:06 +00:00
Shreya Keshive
cfe3753d4c Refactors companion VS Code extension to import & use notification schema defined in gemini-cli (#5059) 2025-07-28 18:20:56 +00:00
Shreya Keshive
787aa624da Refactors companion VS Code extension to import & use notification schema defined in gemini-cli (#5059) 2025-07-28 18:20:56 +00:00
N. Taylor Mullen
9aef0a8e6c Revert "feat: Add /config refresh command" (#5060) 2025-07-28 18:13:46 +00:00
N. Taylor Mullen
56c2d95a4c Revert "feat: Add /config refresh command" (#5060) 2025-07-28 18:13:46 +00:00
Neha Prasad
a5ea113a8e fix: Clear previous thoughts when starting new prompts (#4966) 2025-07-28 17:57:33 +00:00
Neha Prasad
4b3e407d49 fix: Clear previous thoughts when starting new prompts (#4966) 2025-07-28 17:57:33 +00:00
christine betts
379765da23 Add documentation for MCP prompts (#4897)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
2025-07-28 16:01:15 +00:00
christine betts
f1f0da6dc9 Add documentation for MCP prompts (#4897)
Co-authored-by: matt korwel <matt.korwel@gmail.com>
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
2025-07-28 16:01:15 +00:00
Alexander Parshakov
f7e559223d docs: Add more examples to Popular tasks (#4979)
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
2025-07-28 15:54:09 +00:00
Alexander Parshakov
4de893da0d docs: Add more examples to Popular tasks (#4979)
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
2025-07-28 15:54:09 +00:00
Ramón Medrano Llamas
0170791800 feat: Add /config refresh command (#4993)
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
2025-07-28 15:46:43 +00:00
Ramón Medrano Llamas
02bf8c16c7 feat: Add /config refresh command (#4993)
Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
2025-07-28 15:46:43 +00:00
Shreya Keshive
e275441651 Updates schema, UX and prompt for IDE context (#5046) 2025-07-28 15:03:22 +00:00
Shreya Keshive
165b29c3b1 Updates schema, UX and prompt for IDE context (#5046) 2025-07-28 15:03:22 +00:00
James Woo
f2e006179d Fix author attribution (#5042) 2025-07-28 14:45:23 +00:00
James Woo
16322ed0b2 Fix author attribution (#5042) 2025-07-28 14:45:23 +00:00
N. Taylor Mullen
bd85070411 Revert "Propagate user_prompt_id to GenerateConentRequest for logging" (#5007) 2025-07-27 19:28:20 -07:00
N. Taylor Mullen
e1f9f90660 Revert "Propagate user_prompt_id to GenerateConentRequest for logging" (#5007) 2025-07-27 19:28:20 -07:00
Jenna Inouye
9ed351260c Update documentation for read_many_files. (#4874)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-27 22:25:04 +00:00
Jenna Inouye
0371f638c0 Update documentation for read_many_files. (#4874)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-27 22:25:04 +00:00
Jenna Inouye
ab0d9df658 Clarify ToS and privacy documentation FAQs. (#4899) 2025-07-27 22:24:53 +00:00
Jenna Inouye
79703c8ecb Clarify ToS and privacy documentation FAQs. (#4899) 2025-07-27 22:24:53 +00:00
Hiroaki Mitsuyoshi
bce6eb5014 feat(chat): Implement /chat delete command (#2401) 2025-07-27 22:18:12 +00:00
Hiroaki Mitsuyoshi
f3ffb00ed0 feat(chat): Implement /chat delete command (#2401) 2025-07-27 22:18:12 +00:00
Leeroy Ding
9ca48c00a6 fix: yolo mode not respected (#4972) 2025-07-27 21:42:26 +00:00
Leeroy Ding
9d07de7a5b fix: yolo mode not respected (#4972) 2025-07-27 21:42:26 +00:00
Abhi
0b5cc96362 (model) - Use Flash Lite For Next Speaker Checks (#4991) 2025-07-27 21:40:55 +00:00
Abhi
3a384784d7 (model) - Use Flash Lite For Next Speaker Checks (#4991) 2025-07-27 21:40:55 +00:00
owenofbrien
b497791c59 Propagate user_prompt_id to GenerateConentRequest for logging (#4741) 2025-07-27 21:34:39 +00:00
owenofbrien
e7b90f54e6 Propagate user_prompt_id to GenerateConentRequest for logging (#4741) 2025-07-27 21:34:39 +00:00
Abhi
36e1e57252 (docs) - Fix small markdown mistake for custom commands docs (#4983) 2025-07-27 21:33:58 +00:00
Abhi
8e983466f8 (docs) - Fix small markdown mistake for custom commands docs (#4983) 2025-07-27 21:33:58 +00:00
Hyeladi Bassi
a9f04eba2c refactor(telemetry): enhance flushToClearcut method with retry logic and early return for empty events (#1601)
Co-authored-by: Scott Densmore <scottdensmore@mac.com>
2025-07-27 18:18:27 +00:00
Hyeladi Bassi
1f013c969f refactor(telemetry): enhance flushToClearcut method with retry logic and early return for empty events (#1601)
Co-authored-by: Scott Densmore <scottdensmore@mac.com>
2025-07-27 18:18:27 +00:00
438 changed files with 52832 additions and 11860 deletions

View File

@@ -0,0 +1,65 @@
name: Build and Publish Docker Image
on:
push:
tags:
- 'v*'
workflow_dispatch:
inputs:
publish:
description: 'Publish to GHCR (only works on main branch)'
type: boolean
default: false
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-to-ghcr:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha,prefix=sha-,format=short
- name: Log in to the Container registry
if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v'))
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image
id: build-and-push
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v') || github.event.inputs.publish == 'true') }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
CLI_VERSION_ARG=${{ github.sha }}
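As a rough guide to what the metadata-action patterns above produce, the sketch below shows how a published image might be pulled locally. The image path and version are illustrative assumptions only (GHCR lowercases the `${{ github.repository }}` value); check the package page for the actual tags.

```bash
# Illustrative only: pull the image by the tags the workflow above would publish.
# Assumes the repository resolves to ghcr.io/qwenlm/qwen-code and a v0.0.8 release tag.

# semver patterns: {{version}} and {{major}}.{{minor}}
docker pull ghcr.io/qwenlm/qwen-code:0.0.8
docker pull ghcr.io/qwenlm/qwen-code:0.0

# every build also gets a short-SHA tag (sha- prefix, short format)
docker pull ghcr.io/qwenlm/qwen-code:sha-1a2b3c4
```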

View File

@@ -36,6 +36,14 @@ jobs:
- name: Run linter
run: npm run lint:ci
- name: Run linter on integration tests
run: npx eslint integration-tests --max-warnings 0
- name: Run formatter on integration tests
run: |
npx prettier --check integration-tests
git diff --exit-code
- name: Build project
run: npm run build

View File

@@ -1,13 +1,39 @@
name: Gemini Automated Issue Triage
name: Qwen Automated Issue Triage
on:
issues:
types: [opened, reopened]
types:
- 'opened'
- 'reopened'
issue_comment:
types:
- 'created'
workflow_dispatch:
inputs:
issue_number:
description: 'issue number to triage'
required: true
type: 'number'
concurrency:
group: '${{ github.workflow }}-${{ github.event.issue.number }}'
cancel-in-progress: true
defaults:
run:
shell: 'bash'
permissions:
contents: 'read'
id-token: 'write'
issues: 'write'
statuses: 'write'
packages: 'read'
jobs:
triage-issue:
timeout-minutes: 5
if: ${{ github.repository == 'google-gemini/gemini-cli' }}
if: ${{ github.repository == 'QwenLM/qwen-code' }}
permissions:
issues: write
contents: read
@@ -17,47 +43,285 @@ jobs:
cancel-in-progress: true
runs-on: ubuntu-latest
steps:
- name: Generate GitHub App Token
id: generate_token
uses: actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e # v2
with:
app-id: ${{ secrets.APP_ID }}
private-key: ${{ secrets.PRIVATE_KEY }}
- name: Run Gemini Issue Triage
uses: google-gemini/gemini-cli-action@df3f890f003d28c60a2a09d2c29e0126e4d1e2ff
- name: Run Qwen Issue Triage
uses: QwenLM/qwen-code-action@5fd6818d04d64e87d255ee4d5f77995e32fbf4c2
env:
GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
ISSUE_TITLE: ${{ github.event.issue.title }}
ISSUE_BODY: ${{ github.event.issue.body }}
with:
version: 0.1.8-rc.0
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
OTLP_GCP_WIF_PROVIDER: ${{ secrets.OTLP_GCP_WIF_PROVIDER }}
OTLP_GOOGLE_CLOUD_PROJECT: ${{ secrets.OTLP_GOOGLE_CLOUD_PROJECT }}
version: 0.0.7
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
settings_json: |
{
"maxSessionTurns": 25,
"coreTools": [
"run_shell_command(echo)",
"run_shell_command(gh label list)",
"run_shell_command(gh issue edit)",
"run_shell_command(gh issue list)"
],
"telemetry": {
"enabled": true,
"target": "gcp"
},
"sandbox": false
}
prompt: |
You are an issue triage assistant. Analyze the current GitHub issue and apply the most appropriate existing labels.
prompt: |-
## Role
You are an issue triage assistant. Analyze the current GitHub issue and apply the most appropriate existing labels. Use the available
tools to gather information; do not ask for information to be provided. Do not remove labels titled help wanted or good first issue.
## Steps
Steps:
1. Run: `gh label list --repo ${{ github.repository }} --limit 100` to get all available labels.
2. Review the issue title and body provided in the environment variables.
3. Select the most relevant labels from the existing labels, focusing on kind/*, area/*, and priority/*.
4. Apply the selected labels to this issue using: `gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --add-label "label1,label2"`
5. If the issue has a "status/need-triage" label, remove it after applying the appropriate labels: `gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --remove-label "status/need-triage"`
2. Review the issue title and body provided in the environment variables: "${ISSUE_TITLE}" and "${ISSUE_BODY}".
3. Ignore any existing priorities or tags on the issue. Just report your findings.
4. Select the most relevant labels from the existing labels, focusing on kind/*, area/*, sub-area/* and priority/*. For area/* and kind/* limit yourself to only the single most applicable label in each case.
6. Apply the selected labels to this issue using: `gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --add-label "label1,label2"`.
7. For each issue, check whether the CLI version is present; it is usually in the output of the /about command and looks like 0.1.5. Anything more than 6 versions older than the most recent should get the status/need-retesting label.
8. If the issue doesn't look like it has sufficient information, recommend the status/need-information label.
9. Use Area definitions mentioned below to help you narrow down issues.
## Guidelines
Guidelines:
- Only use labels that already exist in the repository.
- Do not add comments or modify the issue content.
- Triage only the current issue.
- Assign all applicable kind/*, area/*, and priority/* labels based on the issue content.
- Apply only one area/ label.
- Apply only one kind/ label.
- Apply all applicable sub-area/* and priority/* labels based on the issue content. It's ok to have multiple of these.
- Once you categorize the issue, if it needs information, bump the priority down by 1, e.g. a P0 would become a P1 and a P1 would become a P2. P2 and P3 can stay as-is in this scenario.
Categorization Guidelines:
P0: Critical / Blocker
- A P0 bug is a catastrophic failure that demands immediate attention. It represents a complete showstopper for a significant portion of users or for the development process itself.
Impact:
- Blocks development or testing for the entire team.
- Major security vulnerability that could compromise user data or system integrity.
- Causes data loss or corruption with no workaround.
- Crashes the application or makes a core feature completely unusable for all or most users in a production environment. Will it cause severe quality degradation? Is it preventing contributors from contributing to the repository or is it a release blocker?
Qualifier: Is the main function of the software broken?
Example: The gemini auth login command fails with an unrecoverable error, preventing any user from authenticating and using the rest of the CLI.
P1: High
- A P1 bug is a serious issue that significantly degrades the user experience or impacts a core feature. While not a complete blocker, it's a major problem that needs a fast resolution. Feature requests are almost never P1.
Impact:
- A core feature is broken or behaving incorrectly for a large number of users or large number of use cases.
- Review the bug details and comments to try to figure out if this issue affects a large set of use cases or if it's a narrow set of use cases.
- Severe performance degradation making the application frustratingly slow.
- No straightforward workaround exists, or the workaround is difficult and non-obvious.
Qualifier: Is a key feature unusable or giving very wrong results?
Example: The gemini -p "..." command consistently returns a malformed JSON response or an empty result, making the CLI's primary generation feature unreliable.
P2: Medium
- A P2 bug is a moderately impactful issue. It's a noticeable problem but doesn't prevent the use of the software's main functionality.
Impact:
- Affects a non-critical feature or a smaller, specific subset of users.
- An inconvenient but functional workaround is available and easy to execute.
- Noticeable UI/UX problems that don't break functionality but look unprofessional (e.g., elements are misaligned or overlapping).
Qualifier: Is it an annoying but non-blocking problem?
Example: An error message is unclear or contains a typo, causing user confusion but not halting their workflow.
P3: Low
- A P3 bug is a minor, low-impact issue that is trivial or cosmetic. It has little to no effect on the overall functionality of the application.
Impact:
- Minor cosmetic issues like color inconsistencies, typos in documentation, or slight alignment problems on a non-critical page.
- An edge-case bug that is very difficult to reproduce and affects a tiny fraction of users.
Qualifier: Is it a "nice-to-fix" issue?
Example: Spelling mistakes etc.
Things you should know:
- If users are talking about issues where the model gets downgraded from pro to flash, then I want you to categorize that as a performance issue.
- This product is designed to use different models, e.g. using pro and downgrading to flash. When users report that they don't expect the model to change, those would be categorized as feature requests.
Definition of Areas
area/ux:
- Issues concerning user-facing elements like command usability, interactive features, help docs, and perceived performance.
- I am seeing my screen flicker when using Gemini CLI
- I am seeing the output malformed
- Theme changes aren't taking effect
- My keyboard inputs aren't being recognized
area/platform:
- Issues related to installation, packaging, OS compatibility (Windows, macOS, Linux), and the underlying CLI framework.
area/background: Issues related to long-running background tasks, daemons, and autonomous or proactive agent features.
area/models:
- I am not getting a response that is reasonable or expected. This can include things like:
- I am calling a tool and the tool is not performing as expected.
- I am expecting a tool to be called and it is not getting called,
- Including experience when using
- built-in tools (e.g., web search, code interpreter, read file, writefile, etc.),
- Function calling issues should be under this area
- I am getting responses from the model that are malformed.
- Issues concerning Gemini quality of response and inference,
- Issues talking about unnecessary token consumption.
- Issues talking about the model getting stuck in a loop; be watchful, as this could be the root cause for issues that otherwise seem like model performance issues.
- Memory compression
- unexpected responses,
- poor quality of generated code
area/tools:
- These are primarily issues related to Model Context Protocol
- These are issues that mention MCP support
- feature requests asking for support for new tools.
area/core: Issues with fundamental components like command parsing, configuration management, session state, and the main API client logic. Introducing multi-modality
area/contribution: Issues related to improving the developer contribution experience, such as CI/CD pipelines, build scripts, and test automation infrastructure.
area/authentication: Issues related to user identity, login flows, API key handling, credential storage, and access token management; unable to sign in, selecting the wrong authentication path, etc.
area/security-privacy: Issues concerning vulnerability patching, dependency security, data sanitization, privacy controls, and preventing unauthorized data access.
area/extensibility: Issues related to the plugin system, extension APIs, or making the CLI's functionality available in other applications, GitHub Actions, IDE support, etc.
area/performance: Issues focused on model performance
- Issues with running out of capacity,
- 429 errors etc..
- could also pertain to latency,
- other general software performance like, memory usage, CPU consumption, and algorithmic efficiency.
- Switching models from one to the other unexpectedly.
- name: 'Post Issue Triage Failure Comment'
if: |-
${{ failure() && steps.gemini_issue_triage.outcome == 'failure' }}
uses: 'actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea'
with:
github-token: '${{ steps.generate_token.outputs.token }}'
script: |-
github.rest.issues.createComment({
owner: '${{ github.repository }}'.split('/')[0],
repo: '${{ github.repository }}'.split('/')[1],
issue_number: '${{ github.event.issue.number }}',
body: 'There is a problem with the Gemini CLI issue triaging. Please check the [action logs](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}) for details.'
})
deduplicate-issues:
if: >
github.repository == 'google-gemini/gemini-cli' &&
vars.TRIAGE_DEDUPLICATE_ISSUES != '' &&
(github.event_name == 'issues' ||
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'issue_comment' &&
contains(github.event.comment.body, '@gemini-cli /deduplicate') &&
(github.event.comment.author_association == 'OWNER' ||
github.event.comment.author_association == 'MEMBER' ||
github.event.comment.author_association == 'COLLABORATOR')))
timeout-minutes: 20
runs-on: 'ubuntu-latest'
steps:
- name: 'Checkout repository'
uses: 'actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683'
- name: 'Generate GitHub App Token'
id: 'generate_token'
uses: 'actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e'
with:
app-id: '${{ secrets.APP_ID }}'
private-key: '${{ secrets.PRIVATE_KEY }}'
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: 'Run Gemini Issue Deduplication'
uses: 'google-github-actions/run-gemini-cli@20351b5ea2b4179431f1ae8918a246a0808f8747'
id: 'gemini_issue_deduplication'
env:
GITHUB_TOKEN: '${{ steps.generate_token.outputs.token }}'
ISSUE_TITLE: '${{ github.event.issue.title }}'
ISSUE_BODY: '${{ github.event.issue.body }}'
ISSUE_NUMBER: '${{ github.event.issue.number }}'
REPOSITORY: '${{ github.repository }}'
FIRESTORE_PROJECT: '${{ vars.FIRESTORE_PROJECT }}'
with:
gcp_workload_identity_provider: '${{ vars.GCP_WIF_PROVIDER }}'
gcp_project_id: '${{ vars.GOOGLE_CLOUD_PROJECT }}'
gcp_location: '${{ vars.GOOGLE_CLOUD_LOCATION }}'
gcp_service_account: '${{ vars.SERVICE_ACCOUNT_EMAIL }}'
gemini_api_key: '${{ secrets.GEMINI_API_KEY }}'
use_vertex_ai: '${{ vars.GOOGLE_GENAI_USE_VERTEXAI }}'
use_gemini_code_assist: '${{ vars.GOOGLE_GENAI_USE_GCA }}'
settings: |-
{
"mcpServers": {
"issue_deduplication": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--network", "host",
"-e", "GITHUB_TOKEN",
"-e", "GEMINI_API_KEY",
"-e", "DATABASE_TYPE",
"-e", "FIRESTORE_DATABASE_ID",
"-e", "GCP_PROJECT",
"-e", "GOOGLE_APPLICATION_CREDENTIALS=/app/gcp-credentials.json",
"-v", "${GOOGLE_APPLICATION_CREDENTIALS}:/app/gcp-credentials.json",
"ghcr.io/google-gemini/gemini-cli-issue-triage@sha256:e3de1523f6c83aabb3c54b76d08940a2bf42febcb789dd2da6f95169641f94d3"
],
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}",
"GEMINI_API_KEY": "${{ secrets.GEMINI_API_KEY }}",
"DATABASE_TYPE":"firestore",
"GCP_PROJECT": "${FIRESTORE_PROJECT}",
"FIRESTORE_DATABASE_ID": "(default)",
"GOOGLE_APPLICATION_CREDENTIALS": "${GOOGLE_APPLICATION_CREDENTIALS}"
},
"enabled": true,
"timeout": 600000
}
},
"maxSessionTurns": 25,
"coreTools": [
"run_shell_command(echo)",
"run_shell_command(gh issue comment)",
"run_shell_command(gh issue view)",
"run_shell_command(gh issue edit)"
],
"telemetry": {
"enabled": true,
"target": "gcp"
}
}
prompt: |-
## Role
You are an issue de-duplication assistant. Your goal is to find
duplicate issues, label the current issue as a duplicate, and notify
the user by commenting on the current issue, while avoiding
duplicate comments.
## Steps
1. **Find Potential Duplicates:**
- The repository is ${{ github.repository }} and the issue number is ${{ github.event.issue.number }}.
- Use the `duplicates` tool with the `repo` and `issue_number` to find potential duplicates for the current issue. Do not use the `threshold` parameter.
- If no duplicates are found, you are done.
- Print the JSON output from the `duplicates` tool to the logs.
2. **Refine Duplicates List (if necessary):**
- If the `duplicates` tool returns between 1 and 14 results, you must refine the list.
- For each potential duplicate issue, run `gh issue view <issue-number> --json title,body,comments` to fetch its content.
- Also fetch the content of the original issue: `gh issue view "${ISSUE_NUMBER}" --json title,body,comments`.
- Carefully analyze the content (title, body, comments) of the original issue and all potential duplicates.
- It is very important if the comments on either issue mention that they are not duplicates of each other, to treat them as not duplicates.
- Based on your analysis, create a final list containing only the issues you are highly confident are actual duplicates.
- If your final list is empty, you are done.
- Print to the logs if you omitted any potential duplicates based on your analysis.
- If the `duplicates` tool returned 15+ results, use the top 15 matches (based on descending similarity score value) to perform this step.
3. **Format Final Duplicates List:**
Format the final list of duplicates into a markdown string.
The format should be:
"Found possible duplicate issues:\n\n- #${issue_number}\n\nIf you believe this is not a duplicate, please remove the `status/possible-duplicate` label."
Add an HTML comment to the end for identification: `<!-- gemini-cli-deduplication -->`
4. **Check for Existing Comment:**
- Run `gh issue view "${ISSUE_NUMBER}" --json comments` to get all
comments on the issue.
- Look for a comment made by a bot (the author's login often ends in `[bot]`) that contains `<!-- gemini-cli-deduplication -->`.
- If you find such a comment, store its `id` and `body`.
5. **Decide Action:**
- **If an existing comment is found:**
- Compare the new list of duplicate issues with the list from the existing comment's body.
- If they are the same, do nothing.
- If they are different, edit the existing comment. Use
`gh issue comment "${ISSUE_NUMBER}" --edit-comment <comment-id> --body "..."`.
The new body should be the new list of duplicates, but with the header "Found possible duplicate issues (updated):".
- **If no existing comment is found:**
- Create a new comment with the list of duplicates.
- Use `gh issue comment "${ISSUE_NUMBER}" --body "..."`.
6. **Add Duplicate Label:**
- If you created or updated a comment in the previous step, add the `duplicate` label to the current issue.
- Use `gh issue edit "${ISSUE_NUMBER}" --add-label "status/possible-duplicate"`.
## Guidelines
- Only use the `duplicates` and `run_shell_command` tools.
- The `run_shell_command` tool can be used with `gh issue view`, `gh issue comment`, and `gh issue edit`.
- Do not download or read media files like images, videos, or links. The `--json` flag for `gh issue view` will prevent this.
- Do not modify the issue content or status.
- Only comment on and label the current issue.
- Reference all shell variables as "${VAR}" (with quotes and braces).

View File

@@ -1,100 +1,207 @@
name: Gemini Scheduled Issue Triage
name: Qwen Scheduled Issue Triage
on:
schedule:
- cron: '0 * * * *' # Runs every hour
workflow_dispatch: {}
workflow_dispatch:
concurrency:
group: '${{ github.workflow }}'
cancel-in-progress: true
defaults:
run:
shell: 'bash'
permissions:
contents: 'read'
id-token: 'write'
issues: 'write'
statuses: 'write'
packages: 'read'
jobs:
triage-issues:
timeout-minutes: 10
if: ${{ github.repository == 'google-gemini/gemini-cli' }}
if: ${{ github.repository == 'QwenLM/qwen-code' }}
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
issues: write
steps:
- name: Generate GitHub App Token
id: generate_token
uses: actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e # v2
with:
app-id: ${{ secrets.APP_ID }}
private-key: ${{ secrets.PRIVATE_KEY }}
- name: Find untriaged issues
id: find_issues
env:
GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo "🔍 Finding issues without labels..."
NO_LABEL_ISSUES=$(gh issue list --repo ${{ github.repository }} --search "is:open is:issue no:label" --json number,title,body)
echo "🏷️ Finding issues that need triage..."
NEED_TRIAGE_ISSUES=$(gh issue list --repo ${{ github.repository }} --search "is:open is:issue label:\"status/need-triage\"" --json number,title,body)
echo '🔍 Finding issues without labels...'
NO_LABEL_ISSUES="$(gh issue list --repo "${GITHUB_REPOSITORY}" \
--search 'is:open is:issue no:label' --json number,title,body)"
echo "🔄 Merging and deduplicating issues..."
ISSUES=$(echo "$NO_LABEL_ISSUES" "$NEED_TRIAGE_ISSUES" | jq -c -s 'add | unique_by(.number)')
echo '🏷️ Finding issues that need triage...'
NEED_TRIAGE_ISSUES="$(gh issue list --repo "${GITHUB_REPOSITORY}" \
--search 'is:open is:issue label:"status/needs-triage"' --json number,title,body)"
echo "📝 Setting output for GitHub Actions..."
echo "issues_to_triage=$ISSUES" >> "$GITHUB_OUTPUT"
echo '🔄 Merging and deduplicating issues...'
ISSUES="$(echo "${NO_LABEL_ISSUES}" "${NEED_TRIAGE_ISSUES}" | jq -c -s 'add | unique_by(.number)')"
echo "✅ Found $(echo "$ISSUES" | jq 'length') issues to triage! 🎯"
echo '📝 Setting output for GitHub Actions...'
echo "issues_to_triage=${ISSUES}" >> "${GITHUB_OUTPUT}"
- name: Run Gemini Issue Triage
- name: Run Qwen Issue Triage
if: steps.find_issues.outputs.issues_to_triage != '[]'
uses: google-gemini/gemini-cli-action@df3f890f003d28c60a2a09d2c29e0126e4d1e2ff
uses: QwenLM/qwen-code-action@5fd6818d04d64e87d255ee4d5f77995e32fbf4c2
env:
GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
ISSUES_TO_TRIAGE: ${{ steps.find_issues.outputs.issues_to_triage }}
REPOSITORY: ${{ github.repository }}
with:
version: 0.1.8-rc.0
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
OTLP_GCP_WIF_PROVIDER: ${{ secrets.OTLP_GCP_WIF_PROVIDER }}
OTLP_GOOGLE_CLOUD_PROJECT: ${{ secrets.OTLP_GOOGLE_CLOUD_PROJECT }}
version: 0.0.7
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
settings_json: |
{
"maxSessionTurns": 25,
"coreTools": [
"run_shell_command(echo)",
"run_shell_command(gh label list)",
"run_shell_command(gh issue edit)",
"run_shell_command(gh issue view)",
"run_shell_command(gh issue list)"
],
"telemetry": {
"enabled": true,
"target": "gcp"
},
"sandbox": false
}
prompt: |
You are an issue triage assistant. Analyze issues and apply appropriate labels ONE AT A TIME.
prompt: |-
## Role
Repository: ${{ github.repository }}
You are an issue triage assistant. Analyze issues and apply
appropriate labels. Use the available tools to gather information;
do not ask for information to be provided.
Steps:
1. Run: `gh label list --repo ${{ github.repository }} --limit 100` to see available labels
## Steps
1. Run: `gh label list --repo ${{ github.repository }} --limit 100` to get all available labels.
2. Check environment variable for issues to triage: $ISSUES_TO_TRIAGE (JSON array of issues)
3. Parse the JSON array from step 2 and for EACH INDIVIDUAL issue, apply appropriate labels using separate commands:
- `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label1"`
- `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label2"`
- Continue for each label separately
IMPORTANT: Label each issue individually, one command per issue, one label at a time if needed.
Guidelines:
- Only use existing repository labels from step 1
- Do not add comments to issues
- Triage each issue independently based on title and body content
- Focus on applying: kind/* (bug/enhancement/documentation), area/* (core/cli/testing/windows), and priority/* labels
- If an issue has insufficient information, consider applying "status/need-information"
- After applying appropriate labels to an issue, remove the "status/need-triage" label if present: `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "status/need-triage"`
- Execute one `gh issue edit` command per issue, wait for success before proceeding to the next
Example triage logic:
- Issues with "bug", "error", "broken" → kind/bug
- Issues with "feature", "enhancement", "improve" → kind/enhancement
- Issues about Windows/performance → area/windows, area/performance
- Critical bugs → priority/p0, other bugs → priority/p1, enhancements → priority/p2
3. Review the issue title, body and any comments provided in the environment variables.
4. Ignore any existing priorities or tags on the issue.
5. Select the most relevant labels from the existing labels, focusing on kind/*, area/*, sub-area/* and priority/*.
6. Get the list of labels already on the issue using `gh issue view ISSUE_NUMBER --repo ${{ github.repository }} --json labels -t '{{range .labels}}{{.name}}{{"\n"}}{{end}}'`
7. For area/* and kind/* limit yourself to only the single most applicable label in each case.
8. Give me a single short paragraph about why you are selecting each label in the process. Use the format: Issue ID, Title, Label applied, Label removed, overall explanation.
9. Parse the JSON array from step 2 and for EACH INDIVIDUAL issue, apply appropriate labels using separate commands:
- `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label1"`
- `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --add-label "label2"`
- Continue for each label separately
- IMPORTANT: Label each issue individually, one command per issue, one label at a time if needed.
- Make sure after you apply labels there is only one area/* and one kind/* label per issue.
- To do this look for labels found in step 6 that no longer apply remove them one at a time using
- `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "label-name1"`
- `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "label-name2"`
- IMPORTANT: Remove each label one at a time, one command per issue if needed.
10. For each issue, check whether the CLI version is present; it is usually in the output of the /about command and looks like 0.1.5.
- Anything more than 6 versions older than the most recent should get the status/need-retesting label.
11. If the issue doesn't look like it has sufficient information, recommend the status/need-information label.
- After applying appropriate labels to an issue, remove the "status/need-triage" label if present: `gh issue edit ISSUE_NUMBER --repo ${{ github.repository }} --remove-label "status/need-triage"`
- Execute one `gh issue edit` command per issue, wait for success before proceeding to the next
Process each issue sequentially and confirm each labeling operation before moving to the next issue.
## Guidelines
- Only use labels that already exist in the repository.
- Do not add comments or modify the issue content.
- Do not remove labels titled help wanted or good first issue.
- Triage only the current issue.
- Apply only one area/ label
- Apply only one kind/ label (Do not apply kind/duplicate or kind/parent-issue)
- Apply all applicable sub-area/* and priority/* labels based on the issue content. It's ok to have multiple of these.
- Once you categorize the issue, if it needs information, bump the priority down by 1, e.g. a P0 would become a P1 and a P1 would become a P2. P2 and P3 can stay as-is in this scenario.
Categorization Guidelines:
P0: Critical / Blocker
- A P0 bug is a catastrophic failure that demands immediate attention. It represents a complete showstopper for a significant portion of users or for the development process itself.
Impact:
- Blocks development or testing for the entire team.
- Major security vulnerability that could compromise user data or system integrity.
- Causes data loss or corruption with no workaround.
- Crashes the application or makes a core feature completely unusable for all or most users in a production environment. Will it cause severe quality degradation?
- Is it preventing contributors from contributing to the repository or is it a release blocker?
Qualifier: Is the main function of the software broken?
Example: The gemini auth login command fails with an unrecoverable error, preventing any user from authenticating and using the rest of the CLI.
P1: High
- A P1 bug is a serious issue that significantly degrades the user experience or impacts a core feature. While not a complete blocker, it's a major problem that needs a fast resolution.
- Feature requests are almost never P1.
Impact:
- A core feature is broken or behaving incorrectly for a large number of users or large number of use cases.
- Review the bug details and comments to try to figure out if this issue affects a large set of use cases or if it's a narrow set of use cases.
- Severe performance degradation making the application frustratingly slow.
- No straightforward workaround exists, or the workaround is difficult and non-obvious.
Qualifier: Is a key feature unusable or giving very wrong results?
Example: The gemini -p "..." command consistently returns a malformed JSON response or an empty result, making the CLI's primary generation feature unreliable.
P2: Medium
- A P2 bug is a moderately impactful issue. It's a noticeable problem but doesn't prevent the use of the software's main functionality.
Impact:
- Affects a non-critical feature or a smaller, specific subset of users.
- An inconvenient but functional workaround is available and easy to execute.
- Noticeable UI/UX problems that don't break functionality but look unprofessional (e.g., elements are misaligned or overlapping).
Qualifier: Is it an annoying but non-blocking problem?
Example: An error message is unclear or contains a typo, causing user confusion but not halting their workflow.
P3: Low
- A P3 bug is a minor, low-impact issue that is trivial or cosmetic. It has little to no effect on the overall functionality of the application.
Impact:
- Minor cosmetic issues like color inconsistencies, typos in documentation, or slight alignment problems on a non-critical page.
- An edge-case bug that is very difficult to reproduce and affects a tiny fraction of users.
Qualifier: Is it a "nice-to-fix" issue?
Example: Spelling mistakes etc.
Additional Context:
- If users are talking about issues where the model gets downgraded from pro to flash, then I want you to categorize that as a performance issue.
- This product is designed to use different models, e.g. using pro and downgrading to flash.
- When users report that they don't expect the model to change, those would be categorized as feature requests.
Definition of Areas
area/ux:
- Issues concerning user-facing elements like command usability, interactive features, help docs, and perceived performance.
- I am seeing my screen flicker when using Gemini CLI
- I am seeing the output malformed
- Theme changes aren't taking effect
- My keyboard inputs aren't being recognized
area/platform:
- Issues related to installation, packaging, OS compatibility (Windows, macOS, Linux), and the underlying CLI framework.
area/background: Issues related to long-running background tasks, daemons, and autonomous or proactive agent features.
area/models:
- I am not getting a response that is reasonable or expected. This can include things like:
- I am calling a tool and the tool is not performing as expected.
- I am expecting a tool to be called and it is not getting called,
- Including experience when using
- built-in tools (e.g., web search, code interpreter, read file, writefile, etc.),
- Function calling issues should be under this area
- I am getting responses from the model that are malformed.
- Issues concerning Gemini quality of response and inference,
- Issues talking about unnecessary token consumption.
- Issues talking about the model getting stuck in a loop; be watchful, as this could be the root cause for issues that otherwise seem like model performance issues.
- Memory compression
- unexpected responses,
- poor quality of generated code
area/tools:
- These are primarily issues related to Model Context Protocol
- These are issues that mention MCP support
- feature requests asking for support for new tools.
area/core:
- Issues with fundamental components like command parsing, configuration management, session state, and the main API client logic. Introducing multi-modality
area/contribution:
- Issues related to improving the developer contribution experience, such as CI/CD pipelines, build scripts, and test automation infrastructure.
area/authentication:
- Issues related to user identity, login flows, API key handling, credential storage, and access token management; unable to sign in, selecting the wrong authentication path, etc.
area/security-privacy:
- Issues concerning vulnerability patching, dependency security, data sanitization, privacy controls, and preventing unauthorized data access.
area/extensibility:
- Issues related to the plugin system, extension APIs, or making the CLI's functionality available in other applications, GitHub Actions, IDE support, etc.
area/performance:
- Issues focused on model performance
- Issues with running out of capacity,
- 429 errors etc..
- could also pertain to latency,
- other general software performance like, memory usage, CPU consumption, and algorithmic efficiency.
- Switching models from one to the other unexpectedly.

View File

@@ -1,4 +1,4 @@
name: Gemini Scheduled PR Triage 🚀
name: Qwen Scheduled PR Triage 🚀
on:
schedule:
@@ -8,7 +8,7 @@ on:
jobs:
audit-prs:
timeout-minutes: 15
if: ${{ github.repository == 'google-gemini/gemini-cli' }}
if: ${{ github.repository == 'QwenLM/qwen-code' }}
permissions:
contents: read
id-token: write
@@ -21,16 +21,9 @@ jobs:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Generate GitHub App Token
id: generate_token
uses: actions/create-github-app-token@df432ceedc7162793a195dd1713ff69aefc7379e # v2
with:
app-id: ${{ secrets.APP_ID }}
private-key: ${{ secrets.PRIVATE_KEY }}
- name: Run PR Triage Script
id: run_triage
env:
GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_REPOSITORY: ${{ github.repository }}
run: ./.github/scripts/pr-triage.sh

32
.github/workflows/no-response.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: No Response
# Run as a daily cron at 1:45 AM
on:
schedule:
- cron: '45 1 * * *'
workflow_dispatch: {}
jobs:
no-response:
runs-on: ubuntu-latest
if: ${{ github.repository == 'google-gemini/gemini-cli' }}
permissions:
issues: write
pull-requests: write
concurrency:
group: ${{ github.workflow }}-no-response
cancel-in-progress: true
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: -1
days-before-close: 14
stale-issue-label: 'status/need-information'
close-issue-message: >
This issue was marked as needing more information and has not received a response in 14 days.
Closing it for now. If you still face this problem, feel free to reopen with more details. Thank you!
stale-pr-label: 'status/need-information'
close-pr-message: >
This pull request was marked as needing more information and has had no updates in 14 days.
Closing it for now. You are welcome to reopen with the required info. Thanks for contributing!

View File

@@ -0,0 +1,195 @@
name: 🧐 Qwen Pull Request Review
on:
pull_request_target:
types: [opened]
pull_request_review_comment:
types: [created]
pull_request_review:
types: [submitted]
workflow_dispatch:
inputs:
pr_number:
description: 'PR number to review'
required: true
type: number
jobs:
review-pr:
if: >
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'pull_request_target' &&
github.event.action == 'opened' &&
(github.event.pull_request.author_association == 'OWNER' ||
github.event.pull_request.author_association == 'MEMBER' ||
github.event.pull_request.author_association == 'COLLABORATOR')) ||
(github.event_name == 'issue_comment' &&
github.event.issue.pull_request &&
contains(github.event.comment.body, '@qwen /review') &&
(github.event.comment.author_association == 'OWNER' ||
github.event.comment.author_association == 'MEMBER' ||
github.event.comment.author_association == 'COLLABORATOR')) ||
(github.event_name == 'pull_request_review_comment' &&
contains(github.event.comment.body, '@qwen /review') &&
(github.event.comment.author_association == 'OWNER' ||
github.event.comment.author_association == 'MEMBER' ||
github.event.comment.author_association == 'COLLABORATOR')) ||
(github.event_name == 'pull_request_review' &&
contains(github.event.review.body, '@qwen /review') &&
(github.event.review.author_association == 'OWNER' ||
github.event.review.author_association == 'MEMBER' ||
github.event.review.author_association == 'COLLABORATOR'))
timeout-minutes: 15
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
pull-requests: write
issues: write
steps:
- name: Checkout PR code
uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
fetch-depth: 0
- name: Get PR details (pull_request_target & workflow_dispatch)
id: get_pr
if: github.event_name == 'pull_request_target' || github.event_name == 'workflow_dispatch'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
PR_NUMBER=${{ github.event.inputs.pr_number }}
else
PR_NUMBER=${{ github.event.pull_request.number }}
fi
echo "pr_number=$PR_NUMBER" >> "$GITHUB_OUTPUT"
# Get PR details
PR_DATA=$(gh pr view $PR_NUMBER --json title,body,additions,deletions,changedFiles,baseRefName,headRefName)
echo "pr_data=$PR_DATA" >> "$GITHUB_OUTPUT"
# Get file changes
CHANGED_FILES=$(gh pr diff $PR_NUMBER --name-only)
echo "changed_files<<EOF" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" >> "$GITHUB_OUTPUT"
echo "EOF" >> "$GITHUB_OUTPUT"
- name: Get PR details (issue_comment)
id: get_pr_comment
if: github.event_name == 'issue_comment'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COMMENT_BODY: ${{ github.event.comment.body }}
run: |
PR_NUMBER=${{ github.event.issue.number }}
echo "pr_number=$PR_NUMBER" >> "$GITHUB_OUTPUT"
# Extract additional instructions from comment
ADDITIONAL_INSTRUCTIONS=$(echo "$COMMENT_BODY" | sed 's/.*@qwen \/review//' | xargs)
echo "additional_instructions=$ADDITIONAL_INSTRUCTIONS" >> "$GITHUB_OUTPUT"
# Get PR details
PR_DATA=$(gh pr view $PR_NUMBER --json title,body,additions,deletions,changedFiles,baseRefName,headRefName)
echo "pr_data=$PR_DATA" >> "$GITHUB_OUTPUT"
# Get file changes
CHANGED_FILES=$(gh pr diff $PR_NUMBER --name-only)
echo "changed_files<<EOF" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" >> "$GITHUB_OUTPUT"
echo "EOF" >> "$GITHUB_OUTPUT"
- name: Run Qwen PR Review
uses: QwenLM/qwen-code-action@5fd6818d04d64e87d255ee4d5f77995e32fbf4c2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.get_pr.outputs.pr_number || steps.get_pr_comment.outputs.pr_number }}
PR_DATA: ${{ steps.get_pr.outputs.pr_data || steps.get_pr_comment.outputs.pr_data }}
CHANGED_FILES: ${{ steps.get_pr.outputs.changed_files || steps.get_pr_comment.outputs.changed_files }}
ADDITIONAL_INSTRUCTIONS: ${{ steps.get_pr.outputs.additional_instructions || steps.get_pr_comment.outputs.additional_instructions }}
REPOSITORY: ${{ github.repository }}
with:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
settings_json: |
{
"coreTools": [
"run_shell_command(echo)",
"run_shell_command(gh pr view)",
"run_shell_command(gh pr diff)",
"run_shell_command(gh pr comment)",
"run_shell_command(cat)",
"run_shell_command(head)",
"run_shell_command(tail)",
"run_shell_command(grep)",
"write_file"
],
"sandbox": false
}
prompt: |
You are an expert code reviewer. You have access to shell commands to gather PR information and perform the review.
IMPORTANT: Use the available shell commands to gather information. Do not ask for information to be provided.
Start by running these commands to gather the required data:
1. Run: echo "$PR_DATA" to get PR details (JSON format)
2. Run: echo "$CHANGED_FILES" to get the list of changed files
3. Run: echo "$PR_NUMBER" to get the PR number
4. Run: echo "$ADDITIONAL_INSTRUCTIONS" to see any specific review instructions from the user
5. Run: gh pr diff $PR_NUMBER to see the full diff
6. For any specific files, use: cat filename, head -50 filename, or tail -50 filename
Additional Review Instructions:
If ADDITIONAL_INSTRUCTIONS contains text, prioritize those specific areas or focus points in your review.
Common instruction examples: "focus on security", "check performance", "review error handling", "check for breaking changes"
Once you have the information, provide a comprehensive code review by:
1. Writing your review to a file: write_file("review.md", "<your detailed review feedback here>")
2. Posting the review: gh pr comment $PR_NUMBER --body-file review.md --repo $REPOSITORY
Review Areas:
- **Security**: Authentication, authorization, input validation, data sanitization
- **Performance**: Algorithms, database queries, caching, resource usage
- **Reliability**: Error handling, logging, testing coverage, edge cases
- **Maintainability**: Code structure, documentation, naming conventions
- **Functionality**: Logic correctness, requirements fulfillment
Output Format:
Structure your review using this exact format with markdown:
## 📋 Review Summary
Provide a brief 2-3 sentence overview of the PR and overall assessment.
## 🔍 General Feedback
- List general observations about code quality
- Mention overall patterns or architectural decisions
- Highlight positive aspects of the implementation
- Note any recurring themes across files
## 🎯 Specific Feedback
Only include sections below that have actual issues. If there are no issues in a priority category, omit that entire section.
### 🔴 Critical
(Only include this section if there are critical issues)
Issues that must be addressed before merging (security vulnerabilities, breaking changes, major bugs):
- **File: `filename:line`** - Description of critical issue with specific recommendation
### 🟡 High
(Only include this section if there are high priority issues)
Important issues that should be addressed (performance problems, design flaws, significant bugs):
- **File: `filename:line`** - Description of high priority issue with suggested fix
### 🟢 Medium
(Only include this section if there are medium priority issues)
Improvements that would enhance code quality (style issues, minor optimizations, better practices):
- **File: `filename:line`** - Description of medium priority improvement
### 🔵 Low
(Only include this section if there are suggestions)
Nice-to-have improvements and suggestions (documentation, naming, minor refactoring):
- **File: `filename:line`** - Description of suggestion or enhancement
**Note**: If no specific issues are found in any category, simply state "No specific issues identified in this review."
## ✅ Highlights
(Only include this section if there are positive aspects to highlight)
- Mention specific good practices or implementations
- Acknowledge well-written code sections
- Note improvements from previous versions

View File

@@ -84,6 +84,11 @@ jobs:
echo "RELEASE_TAG=$(echo $VERSION_JSON | jq -r .releaseTag)" >> $GITHUB_OUTPUT
echo "RELEASE_VERSION=$(echo $VERSION_JSON | jq -r .releaseVersion)" >> $GITHUB_OUTPUT
echo "NPM_TAG=$(echo $VERSION_JSON | jq -r .npmTag)" >> $GITHUB_OUTPUT
# Get the previous tag for release notes generation
CURRENT_TAG=$(echo $VERSION_JSON | jq -r .releaseTag)
PREVIOUS_TAG=$(node scripts/get-previous-tag.js "$CURRENT_TAG" || echo "")
echo "PREVIOUS_TAG=${PREVIOUS_TAG}" >> $GITHUB_OUTPUT
env:
IS_NIGHTLY: ${{ steps.vars.outputs.is_nightly }}
MANUAL_VERSION: ${{ inputs.version }}
@@ -158,11 +163,20 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_BRANCH: ${{ steps.release_branch.outputs.BRANCH_NAME }}
run: |
gh release create ${{ steps.version.outputs.RELEASE_TAG }} \
bundle/gemini.js \
--target "$RELEASE_BRANCH" \
--title "Release ${{ steps.version.outputs.RELEASE_TAG }}" \
--generate-notes
# Build the gh release create command with appropriate options
RELEASE_CMD="gh release create ${{ steps.version.outputs.RELEASE_TAG }} bundle/gemini.js --target \"$RELEASE_BRANCH\" --title \"Release ${{ steps.version.outputs.RELEASE_TAG }}\""
# Add previous tag for release notes if available
if [[ -n "${{ steps.version.outputs.PREVIOUS_TAG }}" ]]; then
echo "Generating release notes from previous tag: ${{ steps.version.outputs.PREVIOUS_TAG }}"
RELEASE_CMD="$RELEASE_CMD --generate-notes --notes-start-tag ${{ steps.version.outputs.PREVIOUS_TAG }}"
else
echo "No previous tag found, generating release notes from repository history"
RELEASE_CMD="$RELEASE_CMD --generate-notes"
fi
# Execute the release command
eval $RELEASE_CMD
- name: Create Issue on Failure
if: failure()
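The release step above derives PREVIOUS_TAG from `scripts/get-previous-tag.js` so that `gh release create --generate-notes --notes-start-tag` only covers commits since the last release. The script itself is not shown in this diff; as a rough sketch of the equivalent logic (an assumption, not the actual implementation), the same lookup could be done in shell:

```bash
# Sketch only: find the tag immediately preceding CURRENT_TAG in version order.
# The real scripts/get-previous-tag.js may filter differently (e.g. handling nightly tags).
CURRENT_TAG="v0.0.8-nightly.5"            # illustrative value
PREVIOUS_TAG="$(git tag --sort=-v:refname \
  | grep -x -A1 -- "${CURRENT_TAG}" \
  | tail -n 1)"

# If CURRENT_TAG is the oldest tag (or not found), report an empty value,
# mirroring the `|| echo ""` fallback in the workflow step above.
if [ "${PREVIOUS_TAG}" = "${CURRENT_TAG}" ]; then
  PREVIOUS_TAG=""
fi
echo "PREVIOUS_TAG=${PREVIOUS_TAG}"
```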

38
.github/workflows/stale.yml vendored Normal file
View File

@@ -0,0 +1,38 @@
name: Mark stale issues and pull requests
# Run as a daily cron at 1:30 AM
on:
schedule:
- cron: '30 1 * * *'
workflow_dispatch: {}
jobs:
stale:
runs-on: ubuntu-latest
if: ${{ github.repository == 'google-gemini/gemini-cli' }}
permissions:
issues: write
pull-requests: write
concurrency:
group: ${{ github.workflow }}-stale
cancel-in-progress: true
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: >
This issue has been automatically marked as stale due to 60 days of inactivity.
It will be closed in 14 days if no further activity occurs.
stale-pr-message: >
This pull request has been automatically marked as stale due to 60 days of inactivity.
It will be closed in 14 days if no further activity occurs.
close-issue-message: >
This issue has been closed due to 14 additional days of inactivity after being marked as stale.
If you believe this is still relevant, feel free to comment or reopen the issue. Thank you!
close-pr-message: >
This pull request has been closed due to 14 additional days of inactivity after being marked as stale.
If this is still relevant, you are welcome to reopen or leave a comment. Thanks for contributing!
days-before-stale: 60
days-before-close: 14
exempt-issue-labels: pinned,security
exempt-pr-labels: pinned,security

2
.npmrc
View File

@@ -1 +1 @@
@google:registry=https://wombat-dressing-room.appspot.com
registry=https://registry.npmjs.org

3
.vscode/extensions.json vendored Normal file
View File

@@ -0,0 +1,3 @@
{
"recommendations": ["vitest.explorer", "esbenp.prettier-vscode"]
}

2
.vscode/launch.json vendored
View File

@@ -50,7 +50,7 @@
"type": "node",
// fix source mapping when debugging in sandbox using global installation
// note this does not interfere when remoteRoot is also ${workspaceFolder}/packages
"remoteRoot": "/usr/local/share/npm-global/lib/node_modules/@gemini-cli",
"remoteRoot": "/usr/local/share/npm-global/lib/node_modules/@qwen-code",
"localRoot": "${workspaceFolder}/packages"
},
{

15
.vscode/settings.json vendored
View File

@@ -1,3 +1,16 @@
{
"typescript.tsserver.experimental.enableProjectDiagnostics": true
"typescript.tsserver.experimental.enableProjectDiagnostics": true,
"editor.tabSize": 2,
"editor.rulers": [80],
"editor.detectIndentation": false,
"editor.insertSpaces": true,
"[typescript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[json]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
}
}

26
CHANGELOG.md Normal file
View File

@@ -0,0 +1,26 @@
# Changelog
## 0.0.7
- Fix MCP tools
- Fix Web Fetch tool
- Fix Web Search tool by replacing the Google/Gemini web search with the Tavily API
- Fix: tolerate occasional tool call parameters returned by the LLM that are invalid JSON
- Fix: prevent concurrent query submissions in some rare cases
- Fix: incorrect qwen logger exit handler setup
- Fix: separate static QR code and dynamic spin components
- Sync gemini-cli to v0.1.18
## 0.0.6
- Add usage statistics logging for Qwen integration
- Make `/init` command respect configured context filename and align docs with QWEN.md
- Fix EPERM error when run `qwen --sandbox` in macOS
- Fix terminal flicker when waiting for login
- Fix `glm-4.5` model request error
## 0.0.5
- Support Qwen OAuth login and provide up to 2000 free requests per day
- Sync gemini-cli to v0.1.17
- Add systemPromptMappings Configuration Feature

View File

@@ -242,6 +242,8 @@ To hit a breakpoint inside the sandbox container run:
DEBUG=1 gemini
```
**Note:** If you have `DEBUG=true` in a project's `.env` file, it won't affect gemini-cli due to automatic exclusion. Use `.gemini/.env` files for gemini-cli specific debug settings.
### React DevTools
To debug the CLI's React-based UI, you can use React DevTools. Ink, the library used for the CLI's interface, is compatible with React DevTools version 4.x.

View File

@@ -1,3 +1,31 @@
# Build stage
FROM docker.io/library/node:20-slim AS builder
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 \
make \
g++ \
git \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Set up npm global package folder
RUN mkdir -p /usr/local/share/npm-global
ENV NPM_CONFIG_PREFIX=/usr/local/share/npm-global
ENV PATH=$PATH:/usr/local/share/npm-global/bin
# Copy source code
COPY . /home/node/app
WORKDIR /home/node/app
# Install dependencies and build packages
RUN npm ci \
&& npm run build --workspaces \
&& npm pack -w @qwen-code/qwen-code --pack-destination ./packages/cli/dist \
&& npm pack -w @qwen-code/qwen-code-core --pack-destination ./packages/core/dist
# Runtime stage
FROM docker.io/library/node:20-slim
ARG SANDBOX_NAME="qwen-code-sandbox"
@@ -5,11 +33,9 @@ ARG CLI_VERSION_ARG
ENV SANDBOX="$SANDBOX_NAME"
ENV CLI_VERSION=$CLI_VERSION_ARG
# install minimal set of packages, then clean up
# Install runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 \
make \
g++ \
man-db \
curl \
dnsutils \
@@ -29,22 +55,19 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set up npm global package folder under /usr/local/share
# give it to non-root user node, already set up in base image
RUN mkdir -p /usr/local/share/npm-global \
&& chown -R node:node /usr/local/share/npm-global
# Set up npm global package folder
RUN mkdir -p /usr/local/share/npm-global
ENV NPM_CONFIG_PREFIX=/usr/local/share/npm-global
ENV PATH=$PATH:/usr/local/share/npm-global/bin
# switch to non-root user node
USER node
# Copy built packages from builder stage
COPY --from=builder /home/node/app/packages/cli/dist/*.tgz /tmp/
COPY --from=builder /home/node/app/packages/core/dist/*.tgz /tmp/
# install qwen-code and clean up
COPY packages/cli/dist/qwen-code-*.tgz /usr/local/share/npm-global/qwen-code.tgz
COPY packages/core/dist/qwen-code-qwen-code-core-*.tgz /usr/local/share/npm-global/qwen-code-core.tgz
RUN npm install -g /usr/local/share/npm-global/qwen-code.tgz /usr/local/share/npm-global/qwen-code-core.tgz \
# Install built packages globally
RUN npm install -g /tmp/*.tgz \
&& npm cache clean --force \
&& rm -f /usr/local/share/npm-global/qwen-{code,code-core}.tgz
&& rm -rf /tmp/*.tgz
# default entrypoint when none specified
CMD ["qwen"]
# Default entrypoint when none specified
CMD ["qwen"]
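To sanity-check the multi-stage Dockerfile locally, a build-and-run sketch might look like the following; the tag name and build-arg value are illustrative, not part of the repository's tooling.

```bash
# Illustrative local smoke test of the multi-stage image.
docker build \
  --build-arg CLI_VERSION_ARG=local-dev \
  -t qwen-code-sandbox:local .

# CMD ["qwen"] is the default, so running the container starts the CLI directly.
docker run --rm -it qwen-code-sandbox:local
```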

View File

@@ -188,6 +188,7 @@
identification within third-party archives.
Copyright 2025 Google LLC
Copyright 2025 Qwen
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -53,7 +53,7 @@ debug:
run-npx:
npx https://github.com/google-gemini/gemini-cli
npx https://github.com/QwenLM/qwen-code
create-alias:
scripts/create_alias.sh

View File

@@ -15,12 +15,43 @@
</div>
<div align="center">
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://readme-i18n.com/de/QwenLM/qwen-code">Deutsch</a> |
<a href="https://readme-i18n.com/es/QwenLM/qwen-code">Español</a> |
<a href="https://readme-i18n.com/fr/QwenLM/qwen-code">français</a> |
<a href="https://readme-i18n.com/ja/QwenLM/qwen-code">日本語</a> |
<a href="https://readme-i18n.com/ko/QwenLM/qwen-code">한국어</a> |
<a href="https://readme-i18n.com/pt/QwenLM/qwen-code">Português</a> |
<a href="https://readme-i18n.com/ru/QwenLM/qwen-code">Русский</a> |
<a href="https://readme-i18n.com/zh/QwenLM/qwen-code">中文</a>
</div>
Qwen Code is a powerful command-line AI workflow tool adapted from [**Gemini CLI**](https://github.com/google-gemini/gemini-cli) ([details](./README.gemini.md)), specifically optimized for [Qwen3-Coder](https://github.com/QwenLM/Qwen3-Coder) models. It enhances your development workflow with advanced code understanding, automated tasks, and intelligent assistance.
## 💡 Free Options Available
Get started with Qwen Code at no cost using any of these free options:
### 🔥 Qwen OAuth (Recommended)
- **2,000 requests per day** with no token limits
- **60 requests per minute** rate limit
- Simply run `qwen` and authenticate with your qwen.ai account
- Automatic credential management and refresh
- Use the `/auth` command to switch to Qwen OAuth if you initially configured OpenAI-compatible mode
### 🌏 Regional Free Tiers
- **Mainland China**: ModelScope offers **2,000 free API calls per day**
- **International**: OpenRouter provides **up to 1,000 free API calls per day** worldwide
For detailed setup instructions, see [Authorization](#authorization).
> [!WARNING]
> **Token Usage Notice**: Qwen Code may issue multiple API calls per cycle, resulting in higher token usage (similar to Claude Code). We're actively optimizing API efficiency.
>
> 💡 **Free Option**: ModelScope provides **2,000 free API calls per day** for users in mainland China. OpenRouter offers up to **1,000 free API calls per day** worldwide. For setup instructions, see [API Configuration](#api-configuration).
## Key Features
@@ -84,15 +115,43 @@ Create or edit `.qwen/settings.json` in your home directory:
- **`/compress`** - Compress conversation history to continue within token limits
- **`/clear`** - Clear all conversation history and start fresh
- **`/status`** - Check current token usage and limits
- **`/stats`** - Check current token usage and limits
> 📝 **Note**: Session token limit applies to a single conversation, not cumulative API calls.
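As a sketch of the `~/.qwen/settings.json` file mentioned above, assuming a `sessionTokenLimit` field controls this per-session limit (the field name and value are illustrative assumptions, not confirmed by this excerpt):

```bash
# Illustrative sketch: create ~/.qwen/settings.json with a per-session token limit.
# The "sessionTokenLimit" field name and the value 32000 are assumptions.
mkdir -p ~/.qwen
cat > ~/.qwen/settings.json <<'EOF'
{
  "sessionTokenLimit": 32000
}
EOF
```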
### API Configuration
### Authorization
Qwen Code supports multiple API providers. You can configure your API key through environment variables or a `.env` file in your project root.
Choose your preferred authentication method based on your needs:
#### Configuration Methods
#### 1. Qwen OAuth (🚀 Recommended - Start in 30 seconds)
The easiest way to get started - completely free with generous quotas:
```bash
# Just run this command and follow the browser authentication
qwen
```
**What happens:**
1. **Instant Setup**: CLI opens your browser automatically
2. **One-Click Login**: Authenticate with your qwen.ai account
3. **Automatic Management**: Credentials cached locally for future use
4. **No Configuration**: Zero setup required - just start coding!
**Free Tier Benefits:**
- **2,000 requests/day** (no token counting needed)
- **60 requests/minute** rate limit
- **Automatic credential refresh**
- **Zero cost** for individual users
- **Note**: Model fallback may occur to maintain service quality
#### 2. OpenAI-Compatible API
Use API keys for OpenAI or other compatible providers:
**Configuration Methods:**
1. **Environment Variables**
@@ -110,7 +169,7 @@ Qwen Code supports multiple API providers. You can configure your API key throug
OPENAI_MODEL=your_model_choice
```
#### API Provider Options
**API Provider Options**
> ⚠️ **Regional Notice:**
>
@@ -265,7 +324,7 @@ qwen
- `/help` - Display available commands
- `/clear` - Clear conversation history
- `/compress` - Compress history to save tokens
- `/status` - Show current session information
- `/stats` - Show current session information
- `/exit` or `/quit` - Exit Qwen Code
### Keyboard Shortcuts
@@ -287,6 +346,8 @@ qwen
See [CONTRIBUTING.md](./CONTRIBUTING.md) to learn how to contribute to the project.
For detailed authentication setup, see the [authentication guide](./docs/cli/authentication.md).
## Troubleshooting
If you encounter issues, check the [troubleshooting guide](docs/troubleshooting.md).

View File

@@ -1,4 +1,4 @@
# Gemini CLI Roadmap
# Qwen CLI Roadmap
The [Official Gemini CLI Roadmap](https://github.com/orgs/google-gemini/projects/11/)
@@ -56,7 +56,7 @@ find initiatives that interest you.
Gemini CLI is an open-source project, and we welcome contributions from the community! Whether you're a developer, a designer, or just an enthusiastic user, you can find our [Community Guidelines here](https://github.com/google-gemini/gemini-cli/blob/main/CONTRIBUTING.md) to learn how to get started. There are many ways to get involved:
- **Roadmap:** Please review and find areas in our [roadmap](https://github.com/google-gemini/gemini-cli/issues/4191) that you would like to contribute to. Contributions based on this will be easiest to integrate with.
- **Report Bugs:** If you find an issue, please create a bug(https://github.com/google-gemini/gemini-cli/issues/new?template=bug_report.yml) with as much detail as possible. If you believe it is a critical breaking issue preventing direct CLI usage, please tag it as `priorty/p0`.
- **Report Bugs:** If you find an issue, please create a [bug](https://github.com/google-gemini/gemini-cli/issues/new?template=bug_report.yml) with as much detail as possible. If you believe it is a critical breaking issue preventing direct CLI usage, please tag it as `priority/p0`.
- **Suggest Features:** Have a great idea? We'd love to hear it! Open a [feature request](https://github.com/google-gemini/gemini-cli/issues/new?template=feature_request.yml).
- **Contribute Code:** Check out our [CONTRIBUTING.md](https://github.com/google-gemini/gemini-cli/blob/main/CONTRIBUTING.md) file for guidelines on how to submit pull requests. We have a list of "good first issues" for new contributors.
- **Write Documentation:** Help us improve our documentation, tutorials, and examples.

5 SECURITY.md Normal file
View File

@@ -0,0 +1,5 @@
# Reporting Security Issues
Please report any security issue or Qwen Code crash report to [ASRC](https://security.alibaba.com/) (Alibaba Security Response Center) where the issue will be triaged appropriately.
Thank you for helping keep our project secure.

View File

@@ -1,104 +1,93 @@
# Authentication Setup
The Gemini CLI requires you to authenticate with Google's AI services. On initial startup you'll need to configure **one** of the following authentication methods:
Qwen Code supports two main authentication methods to access AI models. Choose the method that best fits your use case:
1. **Login with Google (Gemini Code Assist):**
- Use this option to log in with your google account.
- During initial startup, Gemini CLI will direct you to a webpage for authentication. Once authenticated, your credentials will be cached locally so the web login can be skipped on subsequent runs.
- Note that the web login must be done in a browser that can communicate with the machine Gemini CLI is being run from. (Specifically, the browser will be redirected to a localhost url that Gemini CLI will be listening on).
- <a id="workspace-gca">Users may have to specify a GOOGLE_CLOUD_PROJECT if:</a>
1. You have a Google Workspace account. Google Workspace is a paid service for businesses and organizations that provides a suite of productivity tools, including a custom email domain (e.g. your-name@your-company.com), enhanced security features, and administrative controls. These accounts are often managed by an employer or school.
1. You have received a Gemini Code Assist license through the [Google Developer Program](https://developers.google.com/program/plans-and-pricing) (including qualified Google Developer Experts)
1. You have been assigned a license to a current Gemini Code Assist standard or enterprise subscription.
1. You are using the product outside the [supported regions](https://developers.google.com/gemini-code-assist/resources/available-locations) for free individual usage.
1. You are a Google account holder under the age of 18
- If you fall into one of these categories, you must first configure a Google Cloud Project ID to use, [enable the Gemini for Cloud API](https://cloud.google.com/gemini/docs/discover/set-up-gemini#enable-api) and [configure access permissions](https://cloud.google.com/gemini/docs/discover/set-up-gemini#grant-iam).
1. **Qwen OAuth (Recommended):**
- Use this option to log in with your qwen.ai account.
- During initial startup, Qwen Code will direct you to the qwen.ai authentication page. Once authenticated, your credentials will be cached locally so the web login can be skipped on subsequent runs.
- **Requirements:**
- Valid qwen.ai account
- Internet connection for initial authentication
- **Benefits:**
- Seamless access to Qwen models
- Automatic credential refresh
- No manual API key management required
You can temporarily set the environment variable in your current shell session using the following command:
**Getting Started:**
```bash
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
```
```bash
# Start Qwen Code and follow the OAuth flow
qwen
```
- For repeated use, you can add the environment variable to your [.env file](#persisting-environment-variables-with-env-files) or your shell's configuration file (like `~/.bashrc`, `~/.zshrc`, or `~/.profile`). For example, the following command adds the environment variable to a `~/.bashrc` file:
The CLI will automatically open your browser and guide you through the authentication process.
```bash
echo 'export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"' >> ~/.bashrc
source ~/.bashrc
```
**For users who authenticate using their qwen.ai account:**
2. **<a id="gemini-api-key"></a>Gemini API key:**
- Obtain your API key from Google AI Studio: [https://aistudio.google.com/app/apikey](https://aistudio.google.com/app/apikey)
- Set the `GEMINI_API_KEY` environment variable. In the following methods, replace `YOUR_GEMINI_API_KEY` with the API key you obtained from Google AI Studio:
- You can temporarily set the environment variable in your current shell session using the following command:
```bash
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
```
- For repeated use, you can add the environment variable to your [.env file](#persisting-environment-variables-with-env-files).
**Quota:**
- 60 requests per minute
- 2,000 requests per day
- Token usage is not applicable
- Alternatively you can export the API key from your shell's configuration file (like `~/.bashrc`, `~/.zshrc`, or `~/.profile`). For example, the following command adds the environment variable to a `~/.bashrc` file:
**Cost:** Free
```bash
echo 'export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"' >> ~/.bashrc
source ~/.bashrc
```
**Notes:** Per-model quotas are not specified; model fallback may occur to preserve overall service quality.
:warning: Be advised that when you export your API key inside your shell configuration file, any other process executed from the shell can read it.
2. **<a id="openai-api"></a>OpenAI-Compatible API:**
- Use API keys for OpenAI or other compatible providers.
- This method allows you to use various AI models through API keys.
3. **Vertex AI:**
- Obtain your Google Cloud API key: [Get an API Key](https://cloud.google.com/vertex-ai/generative-ai/docs/start/api-keys?usertype=newuser)
- Set the `GOOGLE_API_KEY` environment variable. In the following methods, replace `YOUR_GOOGLE_API_KEY` with your Vertex AI API key:
- You can temporarily set these environment variables in your current shell session using the following commands:
```bash
export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
```
- For repeated use, you can add the environment variables to your [.env file](#persisting-environment-variables-with-env-files) or your shell's configuration file (like `~/.bashrc`, `~/.zshrc`, or `~/.profile`). For example, the following commands add the environment variables to a `~/.bashrc` file:
```bash
echo 'export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"' >> ~/.bashrc
source ~/.bashrc
```
- To use Application Default Credentials (ADC), use the following command:
- Ensure you have a Google Cloud project and have enabled the Vertex AI API.
```bash
gcloud auth application-default login
```
For more information, see [Set up Application Default Credentials for Google Cloud](https://cloud.google.com/docs/authentication/provide-credentials-adc).
- Set the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` environment variables. In the following methods, replace `YOUR_PROJECT_ID` and `YOUR_PROJECT_LOCATION` with the relevant values for your project:
- You can temporarily set these environment variables in your current shell session using the following commands:
```bash
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION" # e.g., us-central1
```
- For repeated use, you can add the environment variables to your [.env file](#persisting-environment-variables-with-env-files)
**Configuration Methods:**
- Alternatively you can export the environment variables from your shell's configuration file (like `~/.bashrc`, `~/.zshrc`, or `~/.profile`). For example, the following commands add the environment variables to a `~/.bashrc` file:
a) **Environment Variables:**
```bash
echo 'export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"' >> ~/.bashrc
echo 'export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"' >> ~/.bashrc
source ~/.bashrc
```
```bash
export OPENAI_API_KEY="your_api_key_here"
export OPENAI_BASE_URL="your_api_endpoint" # Optional
export OPENAI_MODEL="your_model_choice" # Optional
```
:warning: Be advised that when you export your API key inside your shell configuration file, any other process executed from the shell can read it.
b) **Project `.env` File:**
Create a `.env` file in your project root:
4. **Cloud Shell:**
- This option is only available when running in a Google Cloud Shell environment.
- It automatically uses the credentials of the logged-in user in the Cloud Shell environment.
- This is the default authentication method when running in Cloud Shell and no other method is configured.
```env
OPENAI_API_KEY=your_api_key_here
OPENAI_BASE_URL=your_api_endpoint
OPENAI_MODEL=your_model_choice
```
:warning: Be advised that when you export your API key inside your shell configuration file, any other process executed from the shell can read it.
**Supported Providers:**
- OpenAI (https://platform.openai.com/api-keys)
- Alibaba Cloud Bailian
- ModelScope
- OpenRouter
- Azure OpenAI
- Any OpenAI-compatible API
## Switching Authentication Methods
To switch between authentication methods during a session, use the `/auth` command in the CLI interface:
```bash
# Within the CLI, type:
/auth
```
This will allow you to reconfigure your authentication method without restarting the application.
### Persisting Environment Variables with `.env` Files
You can create a **`.gemini/.env`** file in your project directory or in your home directory. Creating a plain **`.env`** file also works, but `.gemini/.env` is recommended to keep Gemini variables isolated from other tools.
You can create a **`.qwen/.env`** file in your project directory or in your home directory. Creating a plain **`.env`** file also works, but `.qwen/.env` is recommended to keep Qwen Code variables isolated from other tools.
Gemini CLI automatically loads environment variables from the **first** `.env` file it finds, using the following search order:
**Important:** Some environment variables (like `DEBUG` and `DEBUG_MODE`) are automatically excluded from project `.env` files to prevent interference with qwen-code behavior. Use `.qwen/.env` files for qwen-code specific variables.
Qwen Code automatically loads environment variables from the **first** `.env` file it finds, using the following search order:
1. Starting in the **current directory** and moving upward toward `/`, for each directory it checks:
1. `.gemini/.env`
1. `.qwen/.env`
2. `.env`
2. If no file is found, it falls back to your **home directory**:
- `~/.gemini/.env`
- `~/.qwen/.env`
- `~/.env`
> **Important:** The search stops at the **first** file encountered—variables are **not merged** across multiple files.
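To illustrate that search order, here is a small standalone shell sketch (not part of the CLI) that prints the first `.env` file that would win under the rules above:

```bash
# Illustrative sketch of the documented search order: walk upward from the
# current directory checking .qwen/.env then .env, then fall back to $HOME.
dir="$PWD"
while true; do
  for f in "$dir/.qwen/.env" "$dir/.env"; do
    if [ -f "$f" ]; then echo "Would load: $f"; exit 0; fi
  done
  [ "$dir" = "/" ] && break
  dir="$(dirname "$dir")"
done
for f in "$HOME/.qwen/.env" "$HOME/.env"; do
  if [ -f "$f" ]; then echo "Would load: $f"; exit 0; fi
done
echo "No .env file found"
```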
@@ -108,37 +97,47 @@ Gemini CLI automatically loads environment variables from the **first** `.env` f
**Project-specific overrides** (take precedence when you are inside the project):
```bash
mkdir -p .gemini
echo 'GOOGLE_CLOUD_PROJECT="your-project-id"' >> .gemini/.env
mkdir -p .qwen
cat >> .qwen/.env <<'EOF'
OPENAI_API_KEY="your-api-key"
OPENAI_BASE_URL="https://api-inference.modelscope.cn/v1"
OPENAI_MODEL="Qwen/Qwen3-Coder-480B-A35B-Instruct"
EOF
```
**User-wide settings** (available in every directory):
```bash
mkdir -p ~/.gemini
cat >> ~/.gemini/.env <<'EOF'
GOOGLE_CLOUD_PROJECT="your-project-id"
GEMINI_API_KEY="your-gemini-api-key"
mkdir -p ~/.qwen
cat >> ~/.qwen/.env <<'EOF'
OPENAI_API_KEY="your-api-key"
OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
OPENAI_MODEL="qwen3-coder-plus"
EOF
```
## Non-Interactive Mode / Headless Environments
When running the Gemini CLI in a non-interactive environment, you cannot use the interactive login flow.
When running Qwen Code in a non-interactive environment, you cannot use the OAuth login flow.
Instead, you must configure authentication using environment variables.
The CLI will automatically detect if it is running in a non-interactive terminal and will use one of the
following authentication methods if available:
The CLI will automatically detect if it is running in a non-interactive terminal and will use the
OpenAI-compatible API method if configured:
1. **Gemini API Key:**
- Set the `GEMINI_API_KEY` environment variable.
- The CLI will use this key to authenticate with the Gemini API.
1. **OpenAI-Compatible API:**
- Set the `OPENAI_API_KEY` environment variable.
- Optionally set `OPENAI_BASE_URL` and `OPENAI_MODEL` for custom endpoints.
- The CLI will use these credentials to authenticate with the API provider.
2. **Vertex AI:**
- Set the `GOOGLE_GENAI_USE_VERTEXAI=true` environment variable.
- **Using an API Key:** Set the `GOOGLE_API_KEY` environment variable.
- **Using Application Default Credentials (ADC):**
- Run `gcloud auth application-default login` in your environment to configure ADC.
- Ensure the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` environment variables are set.
**Example for headless environments:**
If none of these environment variables are set in a non-interactive session, the CLI will exit with an error.
```bash
export OPENAI_API_KEY="your-api-key"
export OPENAI_BASE_URL="https://api-inference.modelscope.cn/v1"
export OPENAI_MODEL="Qwen/Qwen3-Coder-480B-A35B-Instruct"
# Run Qwen Code
qwen
```
If no API key is set in a non-interactive session, the CLI will exit with an error prompting you to configure authentication.

View File

@@ -17,11 +17,19 @@ Slash commands provide meta-level control over the CLI itself.
- **`save`**
- **Description:** Saves the current conversation history. You must add a `<tag>` for identifying the conversation state.
- **Usage:** `/chat save <tag>`
- **Details on Checkpoint Location:** The default locations for saved chat checkpoints are:
- Linux/macOS: `~/.config/google-generative-ai/checkpoints/`
- Windows: `C:\Users\<YourUsername>\AppData\Roaming\google-generative-ai\checkpoints\`
- When you run `/chat list`, the CLI only scans these specific directories to find available checkpoints.
- **Note:** These checkpoints are for manually saving and resuming conversation states. For automatic checkpoints created before file modifications, see the [Checkpointing documentation](../checkpointing.md).
- **`resume`**
- **Description:** Resumes a conversation from a previous save.
- **Usage:** `/chat resume <tag>`
- **`list`**
- **Description:** Lists available tags for chat state resumption.
- **`delete`**
- **Description:** Deletes a saved conversation checkpoint.
- **Usage:** `/chat delete <tag>`
- **`/clear`**
- **Description:** Clear the terminal screen, including the visible session history and scrollback within the CLI. The underlying session data (for history recall) might be preserved depending on the exact implementation, but the visual display is cleared.
@@ -33,6 +41,28 @@ Slash commands provide meta-level control over the CLI itself.
- **`/copy`**
- **Description:** Copies the last output produced by Qwen Code to your clipboard, for easy sharing or reuse.
- **`/directory`** (or **`/dir`**)
- **Description:** Manage workspace directories for multi-directory support.
- **Sub-commands:**
- **`add`**:
- **Description:** Add a directory to the workspace. The path can be absolute or relative to the current working directory, and paths referencing the home directory (e.g., `~/`) are supported as well.
- **Usage:** `/directory add <path1>,<path2>`
- **Note:** Disabled in restrictive sandbox profiles. If you're using a restrictive profile, use `--include-directories` when starting the session instead.
- **`show`**:
- **Description:** Display all directories added by `/directory add` and `--include-directories`.
- **Usage:** `/directory show`
- **`/editor`**
- **Description:** Open a dialog for selecting supported editors.
@@ -54,21 +84,26 @@ Slash commands provide meta-level control over the CLI itself.
- **Keyboard Shortcut:** Press **Ctrl+T** at any time to toggle between showing and hiding tool descriptions.
- **`/memory`**
- **Description:** Manage the AI's instructional context (hierarchical memory loaded from `GEMINI.md` files).
- **Description:** Manage the AI's instructional context (hierarchical memory loaded from `QWEN.md` files by default; configurable via `contextFileName`).
- **Sub-commands:**
- **`add`**:
- **Description:** Adds the following text to the AI's memory. Usage: `/memory add <text to remember>`
- **`show`**:
- **Description:** Display the full, concatenated content of the current hierarchical memory that has been loaded from all `GEMINI.md` files. This lets you inspect the instructional context being provided to the Gemini model.
- **Description:** Display the full, concatenated content of the current hierarchical memory that has been loaded from all context files (e.g., `QWEN.md`). This lets you inspect the instructional context being provided to the model.
- **`refresh`**:
- **Description:** Reload the hierarchical instructional memory from all `GEMINI.md` files found in the configured locations (global, project/ancestors, and sub-directories). This command updates the model with the latest `GEMINI.md` content.
- **Note:** For more details on how `GEMINI.md` files contribute to hierarchical memory, see the [CLI Configuration documentation](./configuration.md#4-geminimd-files-hierarchical-instructional-context).
- **Description:** Reload the hierarchical instructional memory from all context files (default: `QWEN.md`) found in the configured locations (global, project/ancestors, and sub-directories). This updates the model with the latest context content.
- **Note:** For more details on how context files contribute to hierarchical memory, see the [CLI Configuration documentation](./configuration.md#context-files-hierarchical-instructional-context).
- **`/restore`**
- **Description:** Restores the project files to the state they were in just before a tool was executed. This is particularly useful for undoing file edits made by a tool. If run without a tool call ID, it will list available checkpoints to restore from.
- **Usage:** `/restore [tool_call_id]`
- **Note:** Only available if the CLI is invoked with the `--checkpointing` option or configured via [settings](./configuration.md). See [Checkpointing documentation](../checkpointing.md) for more details.
- **`/settings`**
- **Description:** Open the settings editor to view and modify Gemini CLI settings.
- **Details:** This command provides a user-friendly interface for changing settings that control the behavior and appearance of Gemini CLI. It is equivalent to manually editing the `.gemini/settings.json` file, but with validation and guidance to prevent errors.
- **Usage:** Simply run `/settings` and the editor will open. You can then browse or search for specific settings, view their current values, and modify them as desired. Changes to some settings are applied immediately, while others require a restart.
- **`/stats`**
- **Description:** Display detailed statistics for the current Qwen Code session, including token usage, cached token savings (when available), and session duration. Note: Cached token information is only displayed when cached tokens are being used, which occurs with API key authentication but not with OAuth authentication at this time.
@@ -106,6 +141,9 @@ Slash commands provide meta-level control over the CLI itself.
- **Persistent setting:** Vim mode preference is saved to `~/.gemini/settings.json` and restored between sessions
- **Status indicator:** When enabled, shows `[NORMAL]` or `[INSERT]` in the footer
- **`/init`**
- **Description:** Analyzes the current directory and creates a `QWEN.md` context file by default (or the filename specified by `contextFileName`). If a non-empty file already exists, no changes are made. The command seeds an empty file and prompts the model to populate it with project-specific instructions.
### Custom Commands
For a quick start, see the [example](#example-a-pure-function-refactoring-command) below.
@@ -234,7 +272,7 @@ Please generate a Conventional Commit message based on the following git diff:
```diff
!{git diff --staged}
````
```
"""
@@ -255,7 +293,7 @@ First, ensure the user commands directory exists, then create a `refactor` subdi
```bash
mkdir -p ~/.gemini/commands/refactor
touch ~/.gemini/commands/refactor/pure.toml
````
```
**2. Add the content to the file:**
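A minimal sketch of what that file could contain, assuming the `description` and `prompt` TOML fields (both field names and the prompt wording here are assumptions, not the official template):

```bash
# Illustrative sketch only; field names and prompt wording are assumptions.
cat > ~/.gemini/commands/refactor/pure.toml <<'EOF'
description = "Refactor the provided code into a pure function."

prompt = """
Refactor the code I provide into a pure function with no side effects.
Explain any behavioral differences introduced by the refactoring.
"""
EOF
```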

View File

@@ -38,8 +38,8 @@ In addition to a project settings file, a project's `.gemini` directory can cont
### Available settings in `settings.json`:
- **`contextFileName`** (string or array of strings):
- **Description:** Specifies the filename for context files (e.g., `GEMINI.md`, `AGENTS.md`). Can be a single filename or a list of accepted filenames.
- **Default:** `GEMINI.md`
- **Description:** Specifies the filename for context files (e.g., `QWEN.md`, `AGENTS.md`). Can be a single filename or a list of accepted filenames.
- **Default:** `QWEN.md`
- **Example:** `"contextFileName": "AGENTS.md"`
- **`bugCommand`** (object):
@@ -240,6 +240,58 @@ In addition to a project settings file, a project's `.gemini` directory can cont
}
```
- **`excludedProjectEnvVars`** (array of strings):
- **Description:** Specifies environment variables that should be excluded from being loaded from project `.env` files. This prevents project-specific environment variables (like `DEBUG=true`) from interfering with gemini-cli behavior. Variables from `.gemini/.env` files are never excluded.
- **Default:** `["DEBUG", "DEBUG_MODE"]`
- **Example:**
```json
"excludedProjectEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"]
```
- **`includeDirectories`** (array of strings):
- **Description:** Specifies an array of additional absolute or relative paths to include in the workspace context. This allows you to work with files across multiple directories as if they were one. Paths can use `~` to refer to the user's home directory. This setting can be combined with the `--include-directories` command-line flag.
- **Default:** `[]`
- **Example:**
```json
"includeDirectories": [
"/path/to/another/project",
"../shared-library",
"~/common-utils"
]
```
- **`loadMemoryFromIncludeDirectories`** (boolean):
- **Description:** Controls the behavior of the `/memory refresh` command. If set to `true`, `QWEN.md` files are loaded from all added directories. If set to `false`, `QWEN.md` is only loaded from the current directory.
- **Default:** `false`
- **Example:**
```json
"loadMemoryFromIncludeDirectories": true
```
- **`tavilyApiKey`** (string):
- **Description:** API key for Tavily web search service. Required to enable the `web_search` tool functionality. If not configured, the web search tool will be disabled and skipped.
- **Default:** `undefined` (web search disabled)
- **Example:** `"tavilyApiKey": "tvly-your-api-key-here"`
- **`chatCompression`** (object):
- **Description:** Controls the settings for chat history compression, both automatic and
when manually invoked through the /compress command.
- **Properties:**
- **`contextPercentageThreshold`** (number): A value between 0 and 1 that specifies the token threshold for compression as a percentage of the model's total token limit. For example, a value of `0.6` will trigger compression when the chat history exceeds 60% of the token limit.
- **Example:**
```json
"chatCompression": {
"contextPercentageThreshold": 0.6
}
```
- **`showLineNumbers`** (boolean):
- **Description:** Controls whether line numbers are displayed in code blocks in the CLI output.
- **Default:** `true`
- **Example:**
```json
"showLineNumbers": false
```
### Example `settings.json`:
```json
@@ -248,6 +300,7 @@ In addition to a project settings file, a project's `.gemini` directory can cont
"sandbox": "docker",
"toolDiscoveryCommand": "bin/get_tools",
"toolCallCommand": "bin/call_tool",
"tavilyApiKey": "$TAVILY_API_KEY",
"mcpServers": {
"mainServer": {
"command": "bin/mcp_server.py"
@@ -271,7 +324,10 @@ In addition to a project settings file, a project's `.gemini` directory can cont
"run_shell_command": {
"tokenBudget": 100
}
}
},
"excludedProjectEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"],
"includeDirectories": ["path/to/dir1", "~/path/to/dir2", "../path/to/dir3"],
"loadMemoryFromIncludeDirectories": true
}
```
@@ -293,6 +349,8 @@ The CLI automatically loads environment variables from an `.env` file. The loadi
2. If not found, it searches upwards in parent directories until it finds an `.env` file or reaches the project root (identified by a `.git` folder) or the home directory.
3. If still not found, it looks for `~/.env` (in the user's home directory).
**Environment Variable Exclusion:** Some environment variables (like `DEBUG` and `DEBUG_MODE`) are automatically excluded from being loaded from project `.env` files to prevent interference with gemini-cli behavior. Variables from `.gemini/.env` files are never excluded. You can customize this behavior using the `excludedProjectEnvVars` setting in your `settings.json` file.
- **`GEMINI_API_KEY`** (Required):
- Your API key for the Gemini API.
- **Crucial for operation.** The CLI will not function without it.
@@ -332,6 +390,7 @@ The CLI automatically loads environment variables from an `.env` file. The loadi
- `<profile_name>`: Uses a custom profile. To define a custom profile, create a file named `sandbox-macos-<profile_name>.sb` in your project's `.qwen/` directory (e.g., `my-project/.qwen/sandbox-macos-custom.sb`).
- **`DEBUG` or `DEBUG_MODE`** (often used by underlying libraries or the CLI itself):
- Set to `true` or `1` to enable verbose debug logging, which can be helpful for troubleshooting.
- **Note:** These variables are automatically excluded from project `.env` files by default to prevent interference with gemini-cli behavior. Use `.gemini/.env` files if you need to set these for gemini-cli specifically.
- **`NO_COLOR`**:
- Set to any value to disable all color output in the CLI.
- **`CLI_TITLE`**:
@@ -339,6 +398,11 @@ The CLI automatically loads environment variables from an `.env` file. The loadi
- **`CODE_ASSIST_ENDPOINT`**:
- Specifies the endpoint for the code assist server.
- This is useful for development and testing.
- **`TAVILY_API_KEY`**:
- Your API key for the Tavily web search service.
- Required to enable the `web_search` tool functionality.
- If not configured, the web search tool will be disabled and skipped.
- Example: `export TAVILY_API_KEY="tvly-your-api-key-here"`
## Command-Line Arguments
@@ -387,10 +451,18 @@ Arguments passed directly when running the CLI can override other configurations
- **`--proxy`**:
- Sets the proxy for the CLI.
- Example: `--proxy http://localhost:7890`.
- **`--include-directories <dir1,dir2,...>`**:
- Includes additional directories in the workspace for multi-directory support.
- Can be specified multiple times or as comma-separated values.
- A maximum of 5 directories can be added.
- Example: `--include-directories /path/to/project1,/path/to/project2` or `--include-directories /path/to/project1 --include-directories /path/to/project2`
- **`--version`**:
- Displays the version of the CLI.
- **`--openai-logging`**:
- Enables logging of OpenAI API calls for debugging and analysis. This flag overrides the `enableOpenAILogging` setting in `settings.json`.
- **`--tavily-api-key <api_key>`**:
- Sets the Tavily API key for web search functionality for this session.
- Example: `qwen --tavily-api-key tvly-your-api-key-here`
## Context Files (Hierarchical Instructional Context)
@@ -398,7 +470,7 @@ While not strictly configuration for the CLI's _behavior_, context files (defaul
- **Purpose:** These Markdown files contain instructions, guidelines, or context that you want the Gemini model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.
### Example Context File Content (e.g., `GEMINI.md`)
### Example Context File Content (e.g., `QWEN.md`)
Here's a conceptual example of what a context file at the root of a TypeScript project might contain:
@@ -433,9 +505,9 @@ Here's a conceptual example of what a context file at the root of a TypeScript p
This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated hierarchical memory system by loading context files (e.g., `GEMINI.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
- **Hierarchical Loading and Precedence:** The CLI implements a sophisticated hierarchical memory system by loading context files (e.g., `QWEN.md`) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the `/memory show` command. The typical loading order is:
1. **Global Context File:**
- Location: `~/.gemini/<contextFileName>` (e.g., `~/.gemini/GEMINI.md` in your user home directory).
- Location: `~/.qwen/<contextFileName>` (e.g., `~/.qwen/QWEN.md` in your user home directory).
- Scope: Provides default instructions for all your projects.
2. **Project Root & Ancestors Context Files:**
- Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a `.git` folder) or your home directory.
@@ -444,6 +516,7 @@ This example demonstrates how you can provide general project context, specific
- Location: The CLI also scans for the configured context file in subdirectories _below_ the current working directory (respecting common ignore patterns like `node_modules`, `.git`, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with a `memoryDiscoveryMaxDirs` field in your `settings.json` file.
- Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project.
- **Concatenation & UI Indication:** The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt to the Gemini model. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
- **Importing Content:** You can modularize your context files by importing other Markdown files using the `@path/to/file.md` syntax. For more details, see the [Memory Import Processor documentation](../core/memport.md).
- **Commands for Memory Management:**
- Use `/memory refresh` to force a re-scan and reload of all context files from all configured locations. This updates the AI's instructional context.
- Use `/memory show` to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI.
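A quick sketch of setting up a global and a project-level context file under the default `QWEN.md` name (the file contents are placeholders):

```bash
# Illustrative: one global and one project-level context file.
mkdir -p ~/.qwen
echo "Always prefer concise answers." > ~/.qwen/QWEN.md
echo "This repository uses TypeScript with strict mode enabled." > ./QWEN.md

# Inside the CLI, reload and inspect the combined context:
#   /memory refresh
#   /memory show
```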
@@ -505,3 +578,5 @@ You can opt out of usage statistics collection at any time by setting the `usage
"usageStatisticsEnabled": false
}
```
Note: When usage statistics are enabled, events are sent to an Alibaba Cloud RUM collection endpoint.

View File

@@ -1,28 +1,28 @@
# Gemini CLI
# Qwen Code CLI
Within Gemini CLI, `packages/cli` is the frontend for users to send and receive prompts with the Gemini AI model and its associated tools. For a general overview of Gemini CLI, see the [main documentation page](../index.md).
Within Qwen Code, `packages/cli` is the frontend for users to send and receive prompts with Qwen and other AI models and their associated tools. For a general overview of Qwen Code, see the [main documentation page](../index.md).
## Navigating this section
- **[Authentication](./authentication.md):** A guide to setting up authentication with Google's AI services.
- **[Commands](./commands.md):** A reference for Gemini CLI commands (e.g., `/help`, `/tools`, `/theme`).
- **[Configuration](./configuration.md):** A guide to tailoring Gemini CLI behavior using configuration files.
- **[Authentication](./authentication.md):** A guide to setting up authentication with Qwen OAuth and OpenAI-compatible providers.
- **[Commands](./commands.md):** A reference for Qwen Code CLI commands (e.g., `/help`, `/tools`, `/theme`).
- **[Configuration](./configuration.md):** A guide to tailoring Qwen Code CLI behavior using configuration files.
- **[Token Caching](./token-caching.md):** Optimize API costs through token caching.
- **[Themes](./themes.md)**: A guide to customizing the CLI's appearance with different themes.
- **[Tutorials](tutorials.md)**: A tutorial showing how to use Gemini CLI to automate a development task.
- **[Tutorials](tutorials.md)**: A tutorial showing how to use Qwen Code to automate a development task.
## Non-interactive mode
Gemini CLI can be run in a non-interactive mode, which is useful for scripting and automation. In this mode, you pipe input to the CLI, it executes the command, and then it exits.
Qwen Code can be run in a non-interactive mode, which is useful for scripting and automation. In this mode, you pipe input to the CLI, it executes the command, and then it exits.
The following example pipes a command to Gemini CLI from your terminal:
The following example pipes a command to Qwen Code from your terminal:
```bash
echo "What is fine tuning?" | gemini
echo "What is fine tuning?" | qwen
```
Gemini CLI executes the command and prints the output to your terminal. Note that you can achieve the same behavior by using the `--prompt` or `-p` flag. For example:
Qwen Code executes the command and prints the output to your terminal. Note that you can achieve the same behavior by using the `--prompt` or `-p` flag. For example:
```bash
gemini -p "What is fine tuning?"
qwen -p "What is fine tuning?"
```

View File

@@ -58,7 +58,11 @@ Add a `customThemes` block to your user, project, or system `settings.json` file
"AccentYellow": "#E5C07B",
"AccentRed": "#E06C75",
"Comment": "#5C6370",
"Gray": "#ABB2BF"
"Gray": "#ABB2BF",
"DiffAdded": "#A6E3A1",
"DiffRemoved": "#F38BA8",
"DiffModified": "#89B4FA",
"GradientColors": ["#4796E4", "#847ACE", "#C3677F"]
}
}
}
@@ -77,6 +81,9 @@ Add a `customThemes` block to your user, project, or system `settings.json` file
- `AccentRed`
- `Comment`
- `Gray`
- `DiffAdded` (optional, for added lines in diffs)
- `DiffRemoved` (optional, for removed lines in diffs)
- `DiffModified` (optional, for modified lines in diffs)
**Required Properties:**

View File

@@ -5,14 +5,14 @@ Gemini CLI's core package (`packages/core`) is the backend portion of Gemini CLI
## Navigating this section
- **[Core tools API](./tools-api.md):** Information on how tools are defined, registered, and used by the core.
- **[Memory Import Processor](./memport.md):** Documentation for the modular GEMINI.md import feature using @file.md syntax.
- **[Memory Import Processor](./memport.md):** Documentation for the modular QWEN.md import feature using @file.md syntax.
## Role of the core
While the `packages/cli` portion of Gemini CLI provides the user interface, `packages/core` is responsible for:
- **Gemini API interaction:** Securely communicating with the Google Gemini API, sending user prompts, and receiving model responses.
- **Prompt engineering:** Constructing effective prompts for the Gemini model, potentially incorporating conversation history, tool definitions, and instructional context from `GEMINI.md` files.
- **Prompt engineering:** Constructing effective prompts for the model, potentially incorporating conversation history, tool definitions, and instructional context from context files (e.g., `QWEN.md`).
- **Tool management & orchestration:**
- Registering available tools (e.g., file system tools, shell command execution).
- Interpreting tool use requests from the Gemini model.
@@ -48,8 +48,8 @@ The file discovery service is responsible for finding files in the project that
## Memory discovery service
The memory discovery service is responsible for finding and loading the `GEMINI.md` files that provide context to the model. It searches for these files in a hierarchical manner, starting from the current working directory and moving up to the project root and the user's home directory. It also searches in subdirectories.
The memory discovery service is responsible for finding and loading the context files (default: `QWEN.md`) that provide context to the model. It searches for these files in a hierarchical manner, starting from the current working directory and moving up to the project root and the user's home directory. It also searches in subdirectories.
This allows you to have global, project-level, and component-level context files, which are all combined to provide the model with the most relevant information.
You can use the [`/memory` command](../cli/commands.md) to `show`, `add`, and `refresh` the content of loaded `GEMINI.md` files.
You can use the [`/memory` command](../cli/commands.md) to `show`, `add`, and `refresh` the content of loaded context files.

View File

@@ -1,21 +1,17 @@
# Memory Import Processor
The Memory Import Processor is a feature that allows you to modularize your GEMINI.md files by importing content from other markdown files using the `@file.md` syntax.
The Memory Import Processor is a feature that allows you to modularize your context files (e.g., `QWEN.md`) by importing content from other files using the `@file.md` syntax.
## Overview
This feature enables you to break down large GEMINI.md files into smaller, more manageable components that can be reused across different contexts. The import processor supports both relative and absolute paths, with built-in safety features to prevent circular imports and ensure file access security.
## Important Limitations
**This feature only supports `.md` (markdown) files.** Attempting to import files with other extensions (like `.txt`, `.json`, etc.) will result in a warning and the import will fail.
This feature enables you to break down large context files (e.g., `QWEN.md`) into smaller, more manageable components that can be reused across different contexts. The import processor supports both relative and absolute paths, with built-in safety features to prevent circular imports and ensure file access security.
## Syntax
Use the `@` symbol followed by the path to the markdown file you want to import:
Use the `@` symbol followed by the path to the file you want to import:
```markdown
# Main GEMINI.md file
# Main QWEN.md file
This is the main content.
@@ -43,7 +39,7 @@ More content here.
### Basic Import
```markdown
# My GEMINI.md
# My QWEN.md
Welcome to my project!
@@ -96,24 +92,10 @@ The `validateImportPath` function ensures that imports are only allowed from spe
### Maximum Import Depth
To prevent infinite recursion, there's a configurable maximum import depth (default: 10 levels).
To prevent infinite recursion, there's a configurable maximum import depth (default: 5 levels).
## Error Handling
### Non-MD File Attempts
If you try to import a non-markdown file, you'll see a warning:
```markdown
@./instructions.txt <!-- This will show a warning and fail -->
```
Console output:
```
[WARN] [ImportProcessor] Import processor only supports .md files. Attempting to import non-md file: ./instructions.txt. This will fail.
```
### Missing Files
If a referenced file doesn't exist, the import will fail gracefully with an error comment in the output.
@@ -122,11 +104,41 @@ If a referenced file doesn't exist, the import will fail gracefully with an erro
Permission issues or other file system errors are handled gracefully with appropriate error messages.
## Code Region Detection
The import processor uses the `marked` library to detect code blocks and inline code spans, ensuring that `@` imports inside these regions are properly ignored. This provides robust handling of nested code blocks and complex Markdown structures.
## Import Tree Structure
The processor returns an import tree that shows the hierarchy of imported files. This helps users debug problems with their context files by showing which files were read and their import relationships.
Example tree structure:
```
Memory Files
L project: QWEN.md
L a.md
L b.md
L c.md
L d.md
L e.md
L f.md
L included.md
```
The tree preserves the order that files were imported and shows the complete import chain for debugging purposes.
## Comparison to Claude Code's `/memory` (`claude.md`) Approach
Claude Code's `/memory` feature (as seen in `claude.md`) produces a flat, linear document by concatenating all included files, always marking file boundaries with clear comments and path names. It does not explicitly present the import hierarchy, but the LLM receives all file contents and paths, which is sufficient for reconstructing the hierarchy if needed.
Note: The import tree is mainly for clarity during development and has limited relevance to LLM consumption.
## API Reference
### `processImports(content, basePath, debugMode?, importState?)`
Processes import statements in GEMINI.md content.
Processes import statements in context file content.
**Parameters:**
@@ -135,7 +147,25 @@ Processes import statements in GEMINI.md content.
- `debugMode` (boolean, optional): Whether to enable debug logging (default: false)
- `importState` (ImportState, optional): State tracking for circular import prevention
**Returns:** Promise<string> - Processed content with imports resolved
**Returns:** Promise<ProcessImportsResult> - Object containing processed content and import tree
### `ProcessImportsResult`
```typescript
interface ProcessImportsResult {
content: string; // The processed content with imports resolved
importTree: MemoryFile; // Tree structure showing the import hierarchy
}
```
### `MemoryFile`
```typescript
interface MemoryFile {
path: string; // The file path
imports?: MemoryFile[]; // Direct imports, in the order they were imported
}
```
### `validateImportPath(importPath, basePath, allowedDirectories)`
@@ -149,6 +179,16 @@ Validates import paths to ensure they are safe and within allowed directories.
**Returns:** boolean - Whether the import path is valid
### `findProjectRoot(startDir)`
Finds the project root by searching for a `.git` directory upwards from the given start directory. Implemented as an **async** function using non-blocking file system APIs to avoid blocking the Node.js event loop.
**Parameters:**
- `startDir` (string): The directory to start searching from
**Returns:** Promise<string> - The project root directory (or the start directory if no `.git` is found)
## Best Practices
1. **Use descriptive file names** for imported components
@@ -161,7 +201,7 @@ Validates import paths to ensure they are safe and within allowed directories.
### Common Issues
1. **Import not working**: Check that the file exists and has a `.md` extension
1. **Import not working**: Check that the file exists and the path is correct
2. **Circular import warnings**: Review your import structure for circular references
3. **Permission errors**: Ensure the files are readable and within allowed directories
4. **Path resolution issues**: Use absolute paths if relative paths aren't resolving correctly

View File

@@ -15,9 +15,11 @@ The Gemini CLI core (`packages/core`) features a robust system for defining, reg
- `execute()`: The core method that performs the tool's action and returns a `ToolResult`.
- **`ToolResult` (`tools.ts`):** An interface defining the structure of a tool's execution outcome:
- `llmContent`: The factual string content to be included in the history sent back to the LLM for context.
- `llmContent`: The factual content to be included in the history sent back to the LLM for context. This can be a simple string or a `PartListUnion` (an array of `Part` objects and strings) for rich content.
- `returnDisplay`: A user-friendly string (often Markdown) or a special object (like `FileDiff`) for display in the CLI.
- **Returning Rich Content:** Tools are not limited to returning simple text. The `llmContent` can be a `PartListUnion`, which is an array that can contain a mix of `Part` objects (for images, audio, etc.) and `string`s. This allows a single tool execution to return multiple pieces of rich content.
- **Tool Registry (`tool-registry.ts`):** A class (`ToolRegistry`) responsible for:
- **Registering Tools:** Holding a collection of all available built-in tools (e.g., `ReadFileTool`, `ShellTool`).
- **Discovering Tools:** It can also discover tools dynamically:

View File

@@ -1,23 +1,23 @@
# Gemini CLI Extensions
# Qwen Code Extensions
Gemini CLI supports extensions that can be used to configure and extend its functionality.
Qwen Code supports extensions that can be used to configure and extend its functionality.
## How it works
On startup, Gemini CLI looks for extensions in two locations:
On startup, Qwen Code looks for extensions in two locations:
1. `<workspace>/.gemini/extensions`
2. `<home>/.gemini/extensions`
1. `<workspace>/.qwen/extensions`
2. `<home>/.qwen/extensions`
Gemini CLI loads all extensions from both locations. If an extension with the same name exists in both locations, the extension in the workspace directory takes precedence.
Qwen Code loads all extensions from both locations. If an extension with the same name exists in both locations, the extension in the workspace directory takes precedence.
Within each location, individual extensions exist as a directory that contains a `gemini-extension.json` file. For example:
Within each location, individual extensions exist as a directory that contains a `qwen-extension.json` file. For example:
`<workspace>/.gemini/extensions/my-extension/gemini-extension.json`
`<workspace>/.qwen/extensions/my-extension/qwen-extension.json`
### `gemini-extension.json`
### `qwen-extension.json`
The `gemini-extension.json` file contains the configuration for the extension. The file has the following structure:
The `qwen-extension.json` file contains the configuration for the extension. The file has the following structure:
```json
{
@@ -28,15 +28,49 @@ The `gemini-extension.json` file contains the configuration for the extension. T
"command": "node my-server.js"
}
},
"contextFileName": "GEMINI.md",
"contextFileName": "QWEN.md",
"excludeTools": ["run_shell_command"]
}
```
- `name`: The name of the extension. This is used to uniquely identify the extension. This should match the name of your extension directory.
- `name`: The name of the extension. This is used to uniquely identify the extension and for conflict resolution when extension commands have the same name as user or project commands.
- `version`: The version of the extension.
- `mcpServers`: A map of MCP servers to configure. The key is the name of the server, and the value is the server configuration. These servers will be loaded on startup just like MCP servers configured in a [`settings.json` file](./cli/configuration.md). If both an extension and a `settings.json` file configure an MCP server with the same name, the server defined in the `settings.json` file takes precedence.
- `contextFileName`: The name of the file that contains the context for the extension. This will be used to load the context from the workspace. If this property is not used but a `GEMINI.md` file is present in your extension directory, then that file will be loaded.
- `contextFileName`: The name of the file that contains the context for the extension. This will be used to load the context from the workspace. If this property is not used but a `QWEN.md` file is present in your extension directory, then that file will be loaded.
- `excludeTools`: An array of tool names to exclude from the model. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"excludeTools": ["run_shell_command(rm -rf)"]` will block the `rm -rf` command.
When Gemini CLI starts, it loads all the extensions and merges their configurations. If there are any conflicts, the workspace configuration takes precedence.
When Qwen Code starts, it loads all the extensions and merges their configurations. If there are any conflicts, the workspace configuration takes precedence.
## Extension Commands
Extensions can provide [custom commands](./cli/commands.md#custom-commands) by placing TOML files in a `commands/` subdirectory within the extension directory. These commands follow the same format as user and project custom commands and use standard naming conventions.
### Example
An extension named `gcp` with the following structure:
```
.qwen/extensions/gcp/
├── qwen-extension.json
└── commands/
├── deploy.toml
└── gcs/
└── sync.toml
```
Would provide these commands:
- `/deploy` - Shows as `[gcp] Custom command from deploy.toml` in help
- `/gcs:sync` - Shows as `[gcp] Custom command from sync.toml` in help
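A shell sketch of scaffolding that example layout in the current workspace; the manifest is trimmed to the `name` and `version` fields shown above, and the version value is an assumption:

```bash
# Illustrative: scaffold the example "gcp" extension in the current workspace.
mkdir -p .qwen/extensions/gcp/commands/gcs
cat > .qwen/extensions/gcp/qwen-extension.json <<'EOF'
{
  "name": "gcp",
  "version": "1.0.0"
}
EOF
touch .qwen/extensions/gcp/commands/deploy.toml
touch .qwen/extensions/gcp/commands/gcs/sync.toml
```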
### Conflict Resolution
Extension commands have the lowest precedence. When a conflict occurs with user or project commands:
1. **No conflict**: Extension command uses its natural name (e.g., `/deploy`)
2. **With conflict**: Extension command is renamed with the extension prefix (e.g., `/gcp.deploy`)
For example, if both a user and the `gcp` extension define a `deploy` command:
- `/deploy` - Executes the user's deploy command
- `/gcp.deploy` - Executes the extension's deploy command (marked with `[gcp]` tag)

59 docs/gemini-ignore.md Normal file
View File

@@ -0,0 +1,59 @@
# Ignoring Files
This document provides an overview of the Gemini Ignore (`.geminiignore`) feature of the Gemini CLI.
The Gemini CLI includes the ability to automatically ignore files, similar to `.gitignore` (used by Git) and `.aiexclude` (used by Gemini Code Assist). Adding paths to your `.geminiignore` file will exclude them from tools that support this feature, although they will still be visible to other services (such as Git).
## How it works
When you add a path to your `.geminiignore` file, tools that respect this file will exclude matching files and directories from their operations. For example, when you use the [`read_many_files`](./tools/multi-file.md) command, any paths in your `.geminiignore` file will be automatically excluded.
For the most part, `.geminiignore` follows the conventions of `.gitignore` files:
- Blank lines and lines starting with `#` are ignored.
- Standard glob patterns are supported (such as `*`, `?`, and `[]`).
- Putting a `/` at the end will only match directories.
- Putting a `/` at the beginning anchors the path relative to the `.geminiignore` file.
- `!` negates a pattern.
You can update your `.geminiignore` file at any time. To apply the changes, you must restart your Gemini CLI session.
## How to use `.geminiignore`
To enable `.geminiignore`:
1. Create a file named `.geminiignore` in the root of your project directory.
To add a file or directory to `.geminiignore`:
1. Open your `.geminiignore` file.
2. Add the path or file you want to ignore, for example: `/archive/` or `apikeys.txt`.
### `.geminiignore` examples
You can use `.geminiignore` to ignore directories and files:
```
# Exclude your /packages/ directory and all subdirectories
/packages/
# Exclude your apikeys.txt file
apikeys.txt
```
You can use wildcards in your `.geminiignore` file with `*`:
```
# Exclude all .md files
*.md
```
Finally, you can exempt files and directories from exclusion with `!`:
```
# Exclude all .md files except README.md
*.md
!README.md
```
To remove paths from your `.geminiignore` file, delete the relevant lines.

View File

@@ -28,7 +28,7 @@ This documentation is organized into the following sections:
- **[Multi-File Read Tool](./tools/multi-file.md):** Documentation for the `read_many_files` tool.
- **[Shell Tool](./tools/shell.md):** Documentation for the `run_shell_command` tool.
- **[Web Fetch Tool](./tools/web-fetch.md):** Documentation for the `web_fetch` tool.
- **[Web Search Tool](./tools/web-search.md):** Documentation for the `google_web_search` tool.
- **[Web Search Tool](./tools/web-search.md):** Documentation for the `web_search` tool.
- **[Memory Tool](./tools/memory.md):** Documentation for the `save_memory` tool.
- **[Contributing & Development Guide](../CONTRIBUTING.md):** Information for contributors and developers, including setup, building, testing, and coding conventions.
- **[NPM Workspaces and Publishing](./npm.md):** Details on how the project's packages are managed and published.

View File

@@ -109,10 +109,10 @@ To check for linting errors, run the following command:
npm run lint
```
You can include the `--fix` flag in the command to automatically fix any fixable linting errors:
You can run the `lint:fix` script to automatically fix any fixable linting errors:
```bash
npm run lint --fix
npm run lint:fix
```
## Directory structure

View File

@@ -0,0 +1,84 @@
# Automation and Triage Processes
This document provides a detailed overview of the automated processes we use to manage and triage issues and pull requests. Our goal is to provide prompt feedback and ensure that contributions are reviewed and integrated efficiently. Understanding this automation will help you as a contributor know what to expect and how to best interact with our repository bots.
## Guiding Principle: Issues and Pull Requests
First and foremost, almost every Pull Request (PR) should be linked to a corresponding Issue. The issue describes the "what" and the "why" (the bug or feature), while the PR is the "how" (the implementation). This separation helps us track work, prioritize features, and maintain clear historical context. Our automation is built around this principle.
---
## Detailed Automation Workflows
Here is a breakdown of the specific automation workflows that run in our repository.
### 1. When you open an Issue: `Automated Issue Triage`
This is the first bot you will interact with when you create an issue. Its job is to perform an initial analysis and apply the correct labels.
- **Workflow File**: `.github/workflows/gemini-automated-issue-triage.yml`
- **When it runs**: Immediately after an issue is created or reopened.
- **What it does**:
- It uses a Gemini model to analyze the issue's title and body against a detailed set of guidelines.
- **Applies one `area/*` label**: Categorizes the issue into a functional area of the project (e.g., `area/ux`, `area/models`, `area/platform`).
- **Applies one `kind/*` label**: Identifies the type of issue (e.g., `kind/bug`, `kind/enhancement`, `kind/question`).
- **Applies one `priority/*` label**: Assigns a priority from P0 (critical) to P3 (low) based on the described impact.
- **May apply `status/need-information`**: If the issue lacks critical details (like logs or reproduction steps), it will be flagged for more information.
- **May apply `status/need-retesting`**: If the issue references a CLI version that is more than six versions old, it will be flagged for retesting on a current version.
- **What you should do**:
- Fill out the issue template as completely as possible. The more detail you provide, the more accurate the triage will be.
- If the `status/need-information` label is added, please provide the requested details in a comment.
### 2. When you open a Pull Request: `Continuous Integration (CI)`
This workflow ensures that all changes meet our quality standards before they can be merged.
- **Workflow File**: `.github/workflows/ci.yml`
- **When it runs**: On every push to a pull request.
- **What it does**:
- **Lint**: Checks that your code adheres to our project's formatting and style rules.
- **Test**: Runs our full suite of automated tests across macOS, Windows, and Linux, and on multiple Node.js versions. This is the most time-consuming part of the CI process.
- **Post Coverage Comment**: After all tests have successfully passed, a bot will post a comment on your PR. This comment provides a summary of how well your changes are covered by tests.
- **What you should do**:
- Ensure all CI checks pass. A green checkmark ✅ will appear next to your commit when everything is successful.
- If a check fails (a red "X" ❌), click the "Details" link next to the failed check to view the logs, identify the problem, and push a fix.
### 3. Ongoing Triage for Pull Requests: `PR Auditing and Label Sync`
This workflow runs periodically to ensure all open PRs are correctly linked to issues and have consistent labels.
- **Workflow File**: `.github/workflows/gemini-scheduled-pr-triage.yml`
- **When it runs**: Every 15 minutes on all open pull requests.
- **What it does**:
- **Checks for a linked issue**: The bot scans your PR description for a keyword that links it to an issue (e.g., `Fixes #123`, `Closes #456`).
- **Adds `status/need-issue`**: If no linked issue is found, the bot will add the `status/need-issue` label to your PR. This is a clear signal that an issue needs to be created and linked.
- **Synchronizes labels**: If an issue _is_ linked, the bot ensures the PR's labels perfectly match the issue's labels. It will add any missing labels and remove any that don't belong, and it will remove the `status/need-issue` label if it was present.
- **What you should do**:
- **Always link your PR to an issue.** This is the most important step. Add a line like `Resolves #<issue-number>` to your PR description.
- This will ensure your PR is correctly categorized and moves through the review process smoothly.
### 4. Ongoing Triage for Issues: `Scheduled Issue Triage`
This is a fallback workflow to ensure that no issue gets missed by the triage process.
- **Workflow File**: `.github/workflows/gemini-scheduled-issue-triage.yml`
- **When it runs**: Every hour on all open issues.
- **What it does**:
- It actively seeks out issues that either have no labels at all or still have the `status/need-triage` label.
- It then triggers the same powerful Gemini-based analysis as the initial triage bot to apply the correct labels.
- **What you should do**:
- You typically don't need to do anything. This workflow is a safety net to ensure every issue is eventually categorized, even if the initial triage fails.
### 5. Release Automation
This workflow handles the process of packaging and publishing new versions of the Gemini CLI.
- **Workflow File**: `.github/workflows/release.yml`
- **When it runs**: On a daily schedule for "nightly" releases, and manually for official patch/minor releases.
- **What it does**:
- Automatically builds the project, bumps the version numbers, and publishes the packages to npm.
- Creates a corresponding release on GitHub with generated release notes.
- **What you should do**:
- As a contributor, you don't need to do anything for this process. You can be confident that once your PR is merged into the `main` branch, your changes will be included in the very next nightly release.
We hope this detailed overview is helpful. If you have any questions about our automation or processes, please don't hesitate to ask!

View File

@@ -0,0 +1,62 @@
# Gemini CLI Keyboard Shortcuts
This document lists the available keyboard shortcuts in the Gemini CLI.
## General
| Shortcut | Description |
| -------- | --------------------------------------------------------------------------------------------------------------------- |
| `Esc` | Close dialogs and suggestions. |
| `Ctrl+C` | Exit the application. Press twice to confirm. |
| `Ctrl+D` | Exit the application if the input is empty. Press twice to confirm. |
| `Ctrl+L` | Clear the screen. |
| `Ctrl+O` | Toggle the display of the debug console. |
| `Ctrl+S` | Allows long responses to print fully, disabling truncation. Use your terminal's scrollback to view the entire output. |
| `Ctrl+T` | Toggle the display of tool descriptions. |
| `Ctrl+Y` | Toggle auto-approval (YOLO mode) for all tool calls. |
## Input Prompt
| Shortcut | Description |
| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| `!` | Toggle shell mode when the input is empty. |
| `\` (at end of line) + `Enter` | Insert a newline. |
| `Down Arrow` | Navigate down through the input history. |
| `Enter` | Submit the current prompt. |
| `Meta+Delete` / `Ctrl+Delete` | Delete the word to the right of the cursor. |
| `Tab` | Autocomplete the current suggestion if one exists. |
| `Up Arrow` | Navigate up through the input history. |
| `Ctrl+A` / `Home` | Move the cursor to the beginning of the line. |
| `Ctrl+B` / `Left Arrow` | Move the cursor one character to the left. |
| `Ctrl+C` | Clear the input prompt. |
| `Ctrl+D` / `Delete` | Delete the character to the right of the cursor. |
| `Ctrl+E` / `End` | Move the cursor to the end of the line. |
| `Ctrl+F` / `Right Arrow` | Move the cursor one character to the right. |
| `Ctrl+H` / `Backspace` | Delete the character to the left of the cursor. |
| `Ctrl+K` | Delete from the cursor to the end of the line. |
| `Ctrl+Left Arrow` / `Meta+Left Arrow` / `Meta+B` | Move the cursor one word to the left. |
| `Ctrl+N` | Navigate down through the input history. |
| `Ctrl+P` | Navigate up through the input history. |
| `Ctrl+Right Arrow` / `Meta+Right Arrow` / `Meta+F` | Move the cursor one word to the right. |
| `Ctrl+U` | Delete from the cursor to the beginning of the line. |
| `Ctrl+V` | Paste clipboard content. If the clipboard contains an image, it will be saved and a reference to it will be inserted in the prompt. |
| `Ctrl+W` / `Meta+Backspace` / `Ctrl+Backspace` | Delete the word to the left of the cursor. |
| `Ctrl+X` / `Meta+Enter` | Open the current input in an external editor. |
## Suggestions
| Shortcut | Description |
| --------------- | -------------------------------------- |
| `Down Arrow` | Navigate down through the suggestions. |
| `Tab` / `Enter` | Accept the selected suggestion. |
| `Up Arrow` | Navigate up through the suggestions. |
## Radio Button Select
| Shortcut | Description |
| ------------------ | ------------------------------------------------------------------------------------------------------------- |
| `Down Arrow` / `j` | Move selection down. |
| `Enter` | Confirm selection. |
| `Up Arrow` / `k` | Move selection up. |
| `1-9` | Select an item by its number. |
| (multi-digit) | For items with numbers greater than 9, press the digits in quick succession to select the corresponding item. |

View File

@@ -77,6 +77,24 @@ Built-in profiles (set via `SEATBELT_PROFILE` env var):
- `restrictive-open`: Strict restrictions, network allowed
- `restrictive-closed`: Maximum restrictions
### Custom Sandbox Flags
For container-based sandboxing, you can inject custom flags into the `docker` or `podman` command using the `SANDBOX_FLAGS` environment variable. This is useful for advanced configurations, such as disabling security features for specific use cases.
**Example (Podman)**:
To disable SELinux labeling for volume mounts, you can set the following:
```bash
export SANDBOX_FLAGS="--security-opt label=disable"
```
Multiple flags can be provided as a space-separated string:
```bash
export SANDBOX_FLAGS="--flag1 --flag2=value"
```
## Linux UID/GID handling
The sandbox automatically handles user permissions on Linux. Override these permissions with:
@@ -111,6 +129,8 @@ export SANDBOX_SET_UID_GID=false # Disable UID/GID mapping
DEBUG=1 gemini -s -p "debug command"
```
**Note:** If you have `DEBUG=true` in a project's `.env` file, it won't affect gemini-cli due to automatic exclusion. Use `.gemini/.env` files for gemini-cli specific debug settings.
### Inspect sandbox
```bash

View File

@@ -58,7 +58,17 @@ You can export all telemetry data to a file for local inspection.
To enable file export, use the `--telemetry-outfile` flag with a path to your desired output file. This must be run using `--telemetry-target=local`.
```bash
gemini --telemetry --telemetry-target=local --telemetry-outfile=/path/to/telemetry.log "your prompt"
# Set your desired output file path
TELEMETRY_FILE=".gemini/telemetry.log"
# Run Gemini CLI with local telemetry
# NOTE: --telemetry-otlp-endpoint="" is required to override the default
# OTLP exporter and ensure telemetry is written to the local file.
gemini --telemetry \
--telemetry-target=local \
--telemetry-otlp-endpoint="" \
--telemetry-outfile="$TELEMETRY_FILE" \
--prompt "What is OpenTelemetry?"
```
## Running an OTEL Collector
@@ -173,9 +183,10 @@ Logs are timestamped records of specific events. The following events are logged
- `function_args`
- `duration_ms`
- `success` (boolean)
- `decision` (string: "accept", "reject", or "modify", if applicable)
- `decision` (string: "accept", "reject", "auto_accept", or "modify", if applicable)
- `error` (if applicable)
- `error_type` (if applicable)
- `metadata` (if applicable, dictionary of string -> any)
- `gemini_cli.api_request`: This event occurs when making a request to Gemini API.
- **Attributes**:
@@ -209,6 +220,11 @@ Logs are timestamped records of specific events. The following events are logged
- **Attributes**:
- `auth_type`
- `gemini_cli.slash_command`: This event occurs when a user executes a slash command.
- **Attributes**:
- `command` (string)
- `subcommand` (string, if applicable)
### Metrics
Metrics are numerical measurements of behavior over time. The following metrics are collected for Gemini CLI:
@@ -247,3 +263,7 @@ Metrics are numerical measurements of behavior over time. The following metrics
- `lines` (Int, if applicable): Number of lines in the file.
- `mimetype` (string, if applicable): Mimetype of the file.
- `extension` (string, if applicable): File extension of the file.
- `ai_added_lines` (Int, if applicable): Number of lines added/changed by AI.
- `ai_removed_lines` (Int, if applicable): Number of lines removed/changed by AI.
- `user_added_lines` (Int, if applicable): Number of lines added/changed by user in AI proposed changes.
- `user_removed_lines` (Int, if applicable): Number of lines removed/changed by user in AI proposed changes.

View File

@@ -169,6 +169,7 @@ Use the `/mcp auth` command to manage OAuth authentication:
- **`scopes`** (string[]): Required OAuth scopes
- **`redirectUri`** (string): Custom redirect URI (defaults to `http://localhost:7777/oauth/callback`)
- **`tokenParamName`** (string): Query parameter name for tokens in SSE URLs
- **`audiences`** (string[]): Audiences the token is valid for
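As a sketch, an SSE server entry using these OAuth options could look like the following (this assumes the options live under an `oauth` key within the server's `settings.json` entry; the server name and URL are illustrative):

```json
{
  "mcpServers": {
    "secureServer": {
      "url": "https://api.example.com/sse",
      "oauth": {
        "scopes": ["read", "write"],
        "redirectUri": "http://localhost:7777/oauth/callback",
        "tokenParamName": "access_token",
        "audiences": ["https://api.example.com"]
      }
    }
  }
}
```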
#### Token Management
@@ -570,3 +571,231 @@ The MCP integration tracks several states:
- **Conflict resolution:** Tool name conflicts between servers are resolved through automatic prefixing
This comprehensive integration makes MCP servers a powerful way to extend the Gemini CLI's capabilities while maintaining security, reliability, and ease of use.
## Returning Rich Content from Tools
MCP tools are not limited to returning simple text. You can return rich, multi-part content, including text, images, audio, and other binary data in a single tool response. This allows you to build powerful tools that can provide diverse information to the model in a single turn.
All data returned from the tool is processed and sent to the model as context for its next generation, enabling it to reason about or summarize the provided information.
### How It Works
To return rich content, your tool's response must adhere to the MCP specification for a [`CallToolResult`](https://modelcontextprotocol.io/specification/2025-06-18/server/tools#tool-result). The `content` field of the result should be an array of `ContentBlock` objects. The Gemini CLI will correctly process this array, separating text from binary data and packaging it for the model.
You can mix and match different content block types in the `content` array. The supported block types include:
- `text`
- `image`
- `audio`
- `resource` (embedded content)
- `resource_link`
### Example: Returning Text and an Image
Here is an example of a valid JSON response from an MCP tool that returns both a text description and an image:
```json
{
  "content": [
    {
      "type": "text",
      "text": "Here is the logo you requested."
    },
    {
      "type": "image",
      "data": "BASE64_ENCODED_IMAGE_DATA_HERE",
      "mimeType": "image/png"
    },
    {
      "type": "text",
      "text": "The logo was created in 2025."
    }
  ]
}
```
When the Gemini CLI receives this response, it will:
1. Extract all the text and combine it into a single `functionResponse` part for the model.
2. Present the image data as a separate `inlineData` part.
3. Provide a clean, user-friendly summary in the CLI, indicating that both text and an image were received.
This enables you to build sophisticated tools that can provide rich, multi-modal context to the Gemini model.
## MCP Prompts as Slash Commands
In addition to tools, MCP servers can expose predefined prompts that can be executed as slash commands within the Gemini CLI. This allows you to create shortcuts for common or complex queries that can be easily invoked by name.
### Defining Prompts on the Server
Here's a small example of a stdio MCP server that defines prompts:
```ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
const server = new McpServer({
  name: 'prompt-server',
  version: '1.0.0',
});

server.registerPrompt(
  'poem-writer',
  {
    title: 'Poem Writer',
    description: 'Write a nice haiku',
    argsSchema: { title: z.string(), mood: z.string().optional() },
  },
  ({ title, mood }) => ({
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: `Write a haiku${mood ? ` with the mood ${mood}` : ''} called ${title}. Note that a haiku is 5 syllables followed by 7 syllables followed by 5 syllables `,
        },
      },
    ],
  }),
);

const transport = new StdioServerTransport();
await server.connect(transport);
```
This can be included in `settings.json` under `mcpServers` with:
```json
"nodeServer": {
"command": "node",
"args": ["filename.ts"],
}
```
### Invoking Prompts
Once a prompt is discovered, you can invoke it using its name as a slash command. The CLI will automatically handle parsing arguments.
```bash
/poem-writer --title="Gemini CLI" --mood="reverent"
```
or, using positional arguments:
```bash
/poem-writer "Gemini CLI" reverent
```
When you run this command, the Gemini CLI executes the `prompts/get` method on the MCP server with the provided arguments. The server is responsible for substituting the arguments into the prompt template and returning the final prompt text. The CLI then sends this prompt to the model for execution. This provides a convenient way to automate and share common workflows.
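For reference, the request the CLI issues follows the standard MCP `prompts/get` shape; here is a sketch of the JSON-RPC message for the example above (the `id` value is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "prompts/get",
  "params": {
    "name": "poem-writer",
    "arguments": {
      "title": "Gemini CLI",
      "mood": "reverent"
    }
  }
}
```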
## Managing MCP Servers with `gemini mcp`
While you can always configure MCP servers by manually editing your `settings.json` file, the Gemini CLI provides a convenient set of commands to manage your server configurations programmatically. These commands streamline the process of adding, listing, and removing MCP servers without needing to directly edit JSON files.
### Adding a Server (`gemini mcp add`)
The `add` command configures a new MCP server in your `settings.json`. Based on the scope (`-s, --scope`), it will be added to either the user config `~/.gemini/settings.json` or the project config `.gemini/settings.json` file.
**Command:**
```bash
gemini mcp add [options] <name> <commandOrUrl> [args...]
```
- `<name>`: A unique name for the server.
- `<commandOrUrl>`: The command to execute (for `stdio`) or the URL (for `http`/`sse`).
- `[args...]`: Optional arguments for a `stdio` command.
**Options (Flags):**
- `-s, --scope`: Configuration scope (user or project). [default: "project"]
- `-t, --transport`: Transport type (stdio, sse, http). [default: "stdio"]
- `-e, --env`: Set environment variables (e.g. -e KEY=value).
- `-H, --header`: Set HTTP headers for SSE and HTTP transports (e.g. -H "X-Api-Key: abc123" -H "Authorization: Bearer abc123").
- `--timeout`: Set connection timeout in milliseconds.
- `--trust`: Trust the server (bypass all tool call confirmation prompts).
- `--description`: Set the description for the server.
- `--include-tools`: A comma-separated list of tools to include.
- `--exclude-tools`: A comma-separated list of tools to exclude.
#### Adding an stdio server
This is the default transport for running local servers.
```bash
# Basic syntax
gemini mcp add <name> <command> [args...]
# Example: Adding a local server
gemini mcp add my-stdio-server -e API_KEY=123 /path/to/server arg1 arg2 arg3
# Example: Adding a local python server
gemini mcp add python-server python server.py --port 8080
```
#### Adding an HTTP server
This transport is for servers that use the streamable HTTP transport.
```bash
# Basic syntax
gemini mcp add --transport http <name> <url>
# Example: Adding an HTTP server
gemini mcp add --transport http http-server https://api.example.com/mcp/
# Example: Adding an HTTP server with an authentication header
gemini mcp add --transport http secure-http https://api.example.com/mcp/ --header "Authorization: Bearer abc123"
```
#### Adding an SSE server
This transport is for servers that use Server-Sent Events (SSE).
```bash
# Basic syntax
gemini mcp add --transport sse <name> <url>
# Example: Adding an SSE server
gemini mcp add --transport sse sse-server https://api.example.com/sse/
# Example: Adding an SSE server with an authentication header
gemini mcp add --transport sse secure-sse https://api.example.com/sse/ --header "Authorization: Bearer abc123"
```
### Listing Servers (`gemini mcp list`)
To view all MCP servers currently configured, use the `list` command. It displays each server's name, configuration details, and connection status.
**Command:**
```bash
gemini mcp list
```
**Example Output:**
```sh
✓ stdio-server: command: python3 server.py (stdio) - Connected
✓ http-server: https://api.example.com/mcp (http) - Connected
✗ sse-server: https://api.example.com/sse (sse) - Disconnected
```
### Removing a Server (`gemini mcp remove`)
To delete a server from your configuration, use the `remove` command with the server's name.
**Command:**
```bash
gemini mcp remove <name>
```
**Example:**
```bash
gemini mcp remove my-server
```
This will find and delete the "my-server" entry from the `mcpServers` object in the appropriate `settings.json` file based on the scope (`-s, --scope`).

View File

@@ -4,7 +4,7 @@ This document describes the `save_memory` tool for the Gemini CLI.
## Description
Use `save_memory` to save and recall information across your Gemini CLI sessions. With `save_memory`, you can direct the CLI to remember key details across sessions, providing personalized and directed assistance.
Use `save_memory` to save and recall information across your Qwen Code sessions. With `save_memory`, you can direct the CLI to remember key details across sessions, providing personalized and directed assistance.
### Arguments
@@ -14,9 +14,9 @@ Use `save_memory` to save and recall information across your Gemini CLI sessions
## How to use `save_memory` with the Gemini CLI
The tool appends the provided `fact` to a special `GEMINI.md` file located in the user's home directory (`~/.gemini/GEMINI.md`). This file can be configured to have a different name.
The tool appends the provided `fact` to your context file in the user's home directory (`~/.qwen/QWEN.md` by default). This filename can be configured via `contextFileName`.
Once added, the facts are stored under a `## Gemini Added Memories` section. This file is loaded as context in subsequent sessions, allowing the CLI to recall the saved information.
Once added, the facts are stored under a `## Qwen Added Memories` section. This file is loaded as context in subsequent sessions, allowing the CLI to recall the saved information.
Usage:

View File

@@ -11,11 +11,13 @@ Use `read_many_files` to read content from multiple files specified by paths or
`read_many_files` can be used to perform tasks such as getting an overview of a codebase, finding where specific functionality is implemented, reviewing documentation, or gathering context from multiple configuration files.
**Note:** `read_many_files` looks for files following the provided paths or glob patterns. A directory path such as `"/docs"` will return an empty result; the tool requires a pattern such as `"/docs/*"` or `"/docs/*.md"` to identify the relevant files.
### Arguments
`read_many_files` takes the following arguments:
- `paths` (list[string], required): An array of glob patterns or paths relative to the tool's target directory (e.g., `["src/**/*.ts"]`, `["README.md", "docs/", "assets/logo.png"]`).
- `paths` (list[string], required): An array of glob patterns or paths relative to the tool's target directory (e.g., `["src/**/*.ts"]`, `["README.md", "docs/*", "assets/logo.png"]`).
- `exclude` (list[string], optional): Glob patterns for files/directories to exclude (e.g., `["**/*.log", "temp/"]`). These are added to default excludes if `useDefaultExcludes` is true.
- `include` (list[string], optional): Additional glob patterns to include. These are merged with `paths` (e.g., `["*.test.ts"]` to specifically add test files if they were broadly excluded, or `["images/*.jpg"]` to include specific image types).
- `recursive` (boolean, optional): Whether to search recursively. This is primarily controlled by `**` in glob patterns. Defaults to `true`.
@@ -50,7 +52,7 @@ Read the main README, all Markdown files in the `docs` directory, and a specific
read_many_files(paths=["README.md", "docs/**/*.md", "assets/logo.png"], exclude=["docs/OLD_README.md"])
```
Read all JavaScript files but explicitly including test files and all JPEGs in an `images` folder:
Read all JavaScript files but explicitly include test files and all JPEGs in an `images` folder:
```
read_many_files(paths=["**/*.js"], include=["**/*.test.js", "images/**/*.jpg"], useDefaultExcludes=False)

View File

@@ -137,6 +137,5 @@ To block all shell commands, add the `run_shell_command` wildcard to `excludeToo
## Security Note for `excludeTools`
Command-specific restrictions in
`excludeTools` for `run_shell_command` are based on simple string matching and can be easily bypassed. This feature is **not a security mechanism** and should not be relied upon to safely execute untrusted code. It is recommended to use `coreTools` to explicitly select commands
Command-specific restrictions in `excludeTools` for `run_shell_command` are based on simple string matching and can be easily bypassed. This feature is **not a security mechanism** and should not be relied upon to safely execute untrusted code. It is recommended to use `coreTools` to explicitly select commands
that can be executed.

View File

@@ -4,24 +4,25 @@ This document describes the `web_fetch` tool for the Gemini CLI.
## Description
Use `web_fetch` to summarize, compare, or extract information from web pages. The `web_fetch` tool processes content from one or more URLs (up to 20) embedded in a prompt. `web_fetch` takes a natural language prompt and returns a generated response.
Use `web_fetch` to fetch content from a specified URL and process it using an AI model. The tool takes a URL and a prompt as input, fetches the URL content, converts HTML to markdown, and processes the content with the prompt using a small, fast model.
### Arguments
`web_fetch` takes one argument:
`web_fetch` takes two arguments:
- `prompt` (string, required): A comprehensive prompt that includes the URL(s) (up to 20) to fetch and specific instructions on how to process their content. For example: `"Summarize https://example.com/article and extract key points from https://another.com/data"`. The prompt must contain at least one URL starting with `http://` or `https://`.
- `url` (string, required): The URL to fetch content from. Must be a fully-formed valid URL starting with `http://` or `https://`.
- `prompt` (string, required): The prompt describing what information you want to extract from the page content.
## How to use `web_fetch` with the Gemini CLI
To use `web_fetch` with the Gemini CLI, provide a natural language prompt that contains URLs. The tool will ask for confirmation before fetching any URLs. Once confirmed, the tool will process URLs through Gemini API's `urlContext`.
To use `web_fetch` with the Gemini CLI, provide a URL and a prompt describing what you want to extract from that URL. The tool will ask for confirmation before fetching the URL. Once confirmed, the tool will fetch the content directly and process it using an AI model.
If the Gemini API cannot access the URL, the tool will fall back to fetching content directly from the local machine. The tool will format the response, including source attribution and citations where possible. The tool will then provide the response to the user.
The tool automatically converts HTML to text, handles GitHub blob URLs (converting them to raw URLs), and upgrades HTTP URLs to HTTPS for security.
Usage:
```
web_fetch(prompt="Your prompt, including a URL such as https://google.com.")
web_fetch(url="https://example.com", prompt="Summarize the main points of this article")
```
## `web_fetch` examples
@@ -29,16 +30,25 @@ web_fetch(prompt="Your prompt, including a URL such as https://google.com.")
Summarize a single article:
```
web_fetch(prompt="Can you summarize the main points of https://example.com/news/latest")
web_fetch(url="https://example.com/news/latest", prompt="Can you summarize the main points of this article?")
```
Compare two articles:
Extract specific information:
```
web_fetch(prompt="What are the differences in the conclusions of these two papers: https://arxiv.org/abs/2401.0001 and https://arxiv.org/abs/2401.0002?")
web_fetch(url="https://arxiv.org/abs/2401.0001", prompt="What are the key findings and methodology described in this paper?")
```
Analyze GitHub documentation:
```
web_fetch(url="https://github.com/google/gemini-react/blob/main/README.md", prompt="What are the installation steps and main features?")
```
## Important notes
- **URL processing:** `web_fetch` relies on the Gemini API's ability to access and process the given URLs.
- **Single URL processing:** `web_fetch` processes one URL at a time. To analyze multiple URLs, make separate calls to the tool.
- **URL format:** The tool automatically upgrades HTTP URLs to HTTPS and converts GitHub blob URLs to raw format for better content access.
- **Content processing:** The tool fetches content directly and processes it using an AI model, converting HTML to readable text format.
- **Output quality:** The quality of the output will depend on the clarity of the instructions in the prompt.
- **MCP tools:** If an MCP-provided web fetch tool is available (starting with "mcp\_\_"), prefer using that tool as it may have fewer restrictions.

View File

@@ -1,36 +1,43 @@
# Web Search Tool (`google_web_search`)
# Web Search Tool (`web_search`)
This document describes the `google_web_search` tool.
This document describes the `web_search` tool.
## Description
Use `google_web_search` to perform a web search using Google Search via the Gemini API. The `google_web_search` tool returns a summary of web results with sources.
Use `web_search` to perform a web search using the Tavily API. The tool returns a concise answer with sources when possible.
### Arguments
`google_web_search` takes one argument:
`web_search` takes one argument:
- `query` (string, required): The search query.
## How to use `google_web_search` with the Gemini CLI
## How to use `web_search`
The `google_web_search` tool sends a query to the Gemini API, which then performs a web search. `google_web_search` will return a generated response based on the search results, including citations and sources.
`web_search` calls the Tavily API directly. You must configure the `TAVILY_API_KEY` through one of the following methods:
1. **Settings file**: Add `"tavilyApiKey": "your-key-here"` to your `settings.json`
2. **Environment variable**: Set `TAVILY_API_KEY` in your environment or `.env` file
3. **Command line**: Use `--tavily-api-key your-key-here` when running the CLI
If the key is not configured, the tool will be disabled and skipped.
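For example, the settings-file method is a single entry in your `settings.json` (a minimal sketch; replace the placeholder with your actual key):

```json
{
  "tavilyApiKey": "your-key-here"
}
```

The environment variable and command-line flag listed above are equivalent alternatives; use whichever fits your workflow.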
Usage:
```
google_web_search(query="Your query goes here.")
web_search(query="Your query goes here.")
```
## `google_web_search` examples
## `web_search` examples
Get information on a topic:
```
google_web_search(query="latest advancements in AI-powered code generation")
web_search(query="latest advancements in AI-powered code generation")
```
## Important notes
- **Response returned:** The `google_web_search` tool returns a processed summary, not a raw list of search results.
- **Citations:** The response includes citations to the sources used to generate the summary.
- **Response returned:** The `web_search` tool returns a concise answer when available, with a list of source links.
- **Citations:** Source links are appended as a numbered list.
- **API key:** Configure `TAVILY_API_KEY` via settings.json, environment variables, .env files, or command line arguments. If not configured, the tool is not registered.

View File

@@ -63,6 +63,8 @@ You may opt-out from sending Usage Statistics to Google by following the instruc
Whether your code, including prompts and answers, is used to train Google's models depends on the type of authentication method you use and your account type.
By default (if you have not opted out):
- **Google account with Gemini Code Assist for Individuals**: Yes. When you use your personal Google account, the [Gemini Code Assist Privacy Notice for Individuals](https://developers.google.com/gemini-code-assist/resources/privacy-notice-gemini-code-assist-individuals) applies. Under this notice,
your **prompts, answers, and related code are collected** and may be used to improve Google's products, including for model training.
- **Google account with Gemini Code Assist for Workspace, Standard, or Enterprise**: No. For these accounts, your data is governed by the [Gemini Code Assist Privacy Notices](https://cloud.google.com/gemini/docs/codeassist/security-privacy-compliance#standard_and_enterprise_data_protection_and_privacy) terms, which treat your inputs as confidential. Your **prompts, answers, and related code are not collected** and are not used to train models.
@@ -71,17 +73,21 @@ Whether your code, including prompts and answers, is used to train Google's mode
- **Paid services**: No. When you use the Gemini API key via the Gemini Developer API with a paid service, the [Gemini API Terms of Service - Paid Services](https://ai.google.dev/gemini-api/terms#paid-services) terms apply, which treats your inputs as confidential. Your **prompts, answers, and related code are not collected** and are not used to train models.
- **Gemini API key via the Vertex AI GenAI API**: No. For these accounts, your data is governed by the [Google Cloud Privacy Notice](https://cloud.google.com/terms/cloud-privacy-notice) terms, which treat your inputs as confidential. Your **prompts, answers, and related code are not collected** and are not used to train models.
For more information about opting out, refer to the next question.
### 2. What are Usage Statistics and what does the opt-out control?
The **Usage Statistics** setting is the single control for all optional data collection in the Gemini CLI.
The data it collects depends on your account and authentication type:
- **Google account with Gemini Code Assist for Individuals**: When enabled, this setting allows Google to collect both anonymous telemetry (for example, commands run and performance metrics) and **your prompts and answers** for model improvement.
- **Google account with Gemini Code Assist for Workspace, Standard, or Enterprise**: This setting only controls the collection of anonymous telemetry. Your prompts and answers are never collected, regardless of this setting.
- **Google account with Gemini Code Assist for Individuals**: When enabled, this setting allows Google to collect both anonymous telemetry (for example, commands run and performance metrics) and **your prompts and answers, including code,** for model improvement.
- **Google account with Gemini Code Assist for Workspace, Standard, or Enterprise**: This setting only controls the collection of anonymous telemetry. Your prompts and answers, including code, are never collected, regardless of this setting.
- **Gemini API key via the Gemini Developer API**:
**Unpaid services**: When enabled, this setting allows Google to collect both anonymous telemetry (like commands run and performance metrics) and **your prompts and answers** for model improvement. When disabled we will use your data as described in [How Google Uses Your Data](https://ai.google.dev/gemini-api/terms#data-use-unpaid).
**Unpaid services**: When enabled, this setting allows Google to collect both anonymous telemetry (like commands run and performance metrics) and **your prompts and answers, including code,** for model improvement. When disabled we will use your data as described in [How Google Uses Your Data](https://ai.google.dev/gemini-api/terms#data-use-unpaid).
**Paid services**: This setting only controls the collection of anonymous telemetry. Google logs prompts and responses for a limited period of time, solely for the purpose of detecting violations of the Prohibited Use Policy and any required legal or regulatory disclosures.
- **Gemini API key via the Vertex AI GenAI API:** This setting only controls the collection of anonymous telemetry. Your prompts and answers are never collected, regardless of this setting.
- **Gemini API key via the Vertex AI GenAI API:** This setting only controls the collection of anonymous telemetry. Your prompts and answers, including code, are never collected, regardless of this setting.
Please refer to the Privacy Notice that applies to your authentication method for more information about what data is collected and how this data is used.
You can disable Usage Statistics for any account type by following the instructions in the [Usage Statistics Configuration](./cli/configuration.md#usage-statistics) documentation.

View File

@@ -1,28 +1,38 @@
# Troubleshooting Guide
# Troubleshooting guide
This guide provides solutions to common issues and debugging tips.
This guide provides solutions to common issues and debugging tips, including topics on:
## Authentication
- Authentication or login errors
- Frequently asked questions (FAQs)
- Debugging tips
- Existing GitHub Issues similar to yours or creating new Issues
## Authentication or login errors
- **Error: `Failed to login. Message: Request contains an invalid argument`**
- Users with Google Workspace accounts, or users with Google Cloud accounts
- Users with Google Workspace accounts or Google Cloud accounts
associated with their Gmail accounts may not be able to activate the free
tier of the Google Code Assist plan.
- For Google Cloud accounts, you can work around this by setting
`GOOGLE_CLOUD_PROJECT` to your project ID.
- You can also grab an API key from [AI Studio](https://aistudio.google.com/app/apikey), which also includes a
- Alternatively, you can obtain the Gemini API key from
[Google AI Studio](http://aistudio.google.com/app/apikey), which also includes a
separate free tier.
## Frequently asked questions (FAQs)
- **Q: How do I update Gemini CLI to the latest version?**
- A: If installed globally via npm, update Gemini CLI using the command `npm install -g @google/gemini-cli@latest`. If run from source, pull the latest changes from the repository and rebuild using `npm run build`.
- A: If you installed it globally via `npm`, update it using the command `npm install -g @google/gemini-cli@latest`. If you compiled it from source, pull the latest changes from the repository, and then rebuild using the command `npm run build`.
- **Q: Where are Gemini CLI configuration files stored?**
- A: The CLI configuration is stored within two `settings.json` files: one in your home directory and one in your project's root directory. In both locations, `settings.json` is found in the `.gemini/` folder. Refer to [CLI Configuration](./cli/configuration.md) for more details.
- **Q: Where are the Gemini CLI configuration or settings files stored?**
- A: The Gemini CLI configuration is stored in two `settings.json` files:
1. In your home directory: `~/.gemini/settings.json`.
2. In your project's root directory: `./.gemini/settings.json`.
Refer to [Gemini CLI Configuration](./cli/configuration.md) for more details.
- **Q: Why don't I see cached token counts in my stats output?**
- A: Cached token information is only displayed when cached tokens are being used. This feature is available for API key users (Gemini API key or Vertex AI) but not for OAuth users (Google Personal/Enterprise accounts) at this time, as the Code Assist API does not support cached content creation. You can still view your total token usage with the `/stats` command.
- A: Cached token information is only displayed when cached tokens are being used. This feature is available for API key users (Gemini API key or Google Cloud Vertex AI) but not for OAuth users (such as Google Personal/Enterprise accounts like Google Gmail or Google Workspace, respectively). This is because the Gemini Code Assist API does not support cached content creation. You can still view your total token usage using the `/stats` command in Gemini CLI.
## Common error messages and solutions
@@ -31,28 +41,34 @@ This guide provides solutions to common issues and debugging tips.
- **Solution:**
Either stop the other process that is using the port or configure the MCP server to use a different port.
- **Error: Command not found (when attempting to run Gemini CLI).**
- **Cause:** Gemini CLI is not correctly installed or not in your system's PATH.
- **Error: Command not found (when attempting to run Gemini CLI with `gemini`).**
- **Cause:** Gemini CLI is not correctly installed or it is not in your system's `PATH`.
- **Solution:**
1. Ensure Gemini CLI installation was successful.
2. If installed globally, check that your npm global binary directory is in your PATH.
3. If running from source, ensure you are using the correct command to invoke it (e.g., `node packages/cli/dist/index.js ...`).
The solution depends on how you installed Gemini CLI:
- If you installed `gemini` globally, check that your `npm` global binary directory is in your `PATH`. You can update Gemini CLI using the command `npm install -g @google/gemini-cli@latest`.
- If you are running `gemini` from source, ensure you are using the correct command to invoke it (e.g., `node packages/cli/dist/index.js ...`). To update Gemini CLI, pull the latest changes from the repository, and then rebuild using the command `npm run build`.
- **Error: `MODULE_NOT_FOUND` or import errors.**
- **Cause:** Dependencies are not installed correctly, or the project hasn't been built.
- **Solution:**
1. Run `npm install` to ensure all dependencies are present.
2. Run `npm run build` to compile the project.
3. Verify that the build completed successfully with `npm run start`.
- **Error: "Operation not permitted", "Permission denied", or similar.**
- **Cause:** If sandboxing is enabled, then the application is likely attempting an operation restricted by your sandbox, such as writing outside the project directory or system temp directory.
- **Solution:** See [Sandboxing](./cli/configuration.md#sandboxing) for more information, including how to customize your sandbox configuration.
- **Cause:** When sandboxing is enabled, Gemini CLI may attempt operations that are restricted by your sandbox configuration, such as writing outside the project directory or system temp directory.
- **Solution:** Refer to the [Configuration: Sandboxing](./cli/configuration.md#sandboxing) documentation for more information, including how to customize your sandbox configuration.
- **CLI is not interactive in "CI" environments**
- **Issue:** The CLI does not enter interactive mode (no prompt appears) if an environment variable starting with `CI_` (e.g., `CI_TOKEN`) is set. This is because the `is-in-ci` package, used by the underlying UI framework, detects these variables and assumes a non-interactive CI environment.
- **Cause:** The `is-in-ci` package checks for the presence of `CI`, `CONTINUOUS_INTEGRATION`, or any environment variable with a `CI_` prefix. When any of these are found, it signals that the environment is non-interactive, which prevents the CLI from starting in its interactive mode.
- **Gemini CLI is not running in interactive mode in "CI" environments**
- **Issue:** The Gemini CLI does not enter interactive mode (no prompt appears) if an environment variable starting with `CI_` (e.g., `CI_TOKEN`) is set. This is because the `is-in-ci` package, used by the underlying UI framework, detects these variables and assumes a non-interactive CI environment.
- **Cause:** The `is-in-ci` package checks for the presence of `CI`, `CONTINUOUS_INTEGRATION`, or any environment variable with a `CI_` prefix. When any of these are found, it signals that the environment is non-interactive, which prevents the Gemini CLI from starting in its interactive mode.
- **Solution:** If the `CI_` prefixed variable is not needed for the CLI to function, you can temporarily unset it for the command, e.g., `env -u CI_TOKEN gemini`.
- **DEBUG mode not working from project .env file**
- **Issue:** Setting `DEBUG=true` in a project's `.env` file doesn't enable debug mode for gemini-cli.
- **Cause:** The `DEBUG` and `DEBUG_MODE` variables are automatically excluded from project `.env` files to prevent interference with gemini-cli behavior.
- **Solution:** Use a `.gemini/.env` file instead, or configure the `excludedProjectEnvVars` setting in your `settings.json` to exclude fewer variables.
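For example, to stop filtering `DEBUG` while still excluding `DEBUG_MODE`, the relevant `settings.json` entry might look like this sketch (assuming `excludedProjectEnvVars` accepts an array of variable names):

```json
{
  "excludedProjectEnvVars": ["DEBUG_MODE"]
}
```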
## Debugging Tips
- **CLI debugging:**
@@ -67,9 +83,11 @@ This guide provides solutions to common issues and debugging tips.
- **Tool issues:**
- If a specific tool is failing, try to isolate the issue by running the simplest possible version of the command or operation the tool performs.
- For `run_shell_command`, check that the command works directly in your shell first.
- For file system tools, double-check paths and permissions.
- For _file system tools_, verify that paths are correct and check the permissions.
- **Pre-flight checks:**
- Always run `npm run preflight` before committing code. This can catch many common issues related to formatting, linting, and type errors.
If you encounter an issue not covered here, consider searching the project's issue tracker on GitHub or reporting a new issue with detailed information.
## Existing GitHub Issues similar to yours or creating new Issues
If you encounter an issue that is not covered in this _Troubleshooting guide_, consider searching the Gemini CLI [Issue tracker on GitHub](https://github.com/google-gemini/gemini-cli/issues). If you can't find an issue similar to yours, consider creating a new GitHub Issue with a detailed description. Pull requests are also welcome!

View File

@@ -34,6 +34,8 @@ export default tseslint.config(
'packages/server/dist/**',
'packages/vscode-ide-companion/dist/**',
'bundle/**',
'package/bundle/**',
'.integration-tests/**',
],
},
eslint.configs.recommended,
@@ -150,24 +152,6 @@ export default tseslint.config(
'default-case': 'error',
},
},
{
files: ['./**/*.{tsx,ts,js}'],
plugins: {
'license-header': licenseHeader,
},
rules: {
'license-header/header': [
'error',
[
'/**',
' * @license',
' * Copyright 2025 Google LLC',
' * SPDX-License-Identifier: Apache-2.0',
' */',
],
],
},
},
// extra settings for scripts that we run directly with node
{
files: ['./scripts/**/*.js', 'esbuild.config.js'],
@@ -203,6 +187,21 @@ export default tseslint.config(
'@typescript-eslint/no-require-imports': 'off',
},
},
// extra settings for scripts that we run directly with node
{
files: ['packages/vscode-ide-companion/scripts/**/*.js'],
languageOptions: {
globals: {
...globals.node,
process: 'readonly',
console: 'readonly',
},
},
rules: {
'no-restricted-syntax': 'off',
'@typescript-eslint/no-require-imports': 'off',
},
},
// Prettier config must be last
prettierConfig,
// extra settings for scripts that we run directly with node

View File

@@ -1,30 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { strict as assert } from 'assert';
import { test } from 'node:test';
import { TestRig } from './test-helper.js';
test('reads a file', (t) => {
const rig = new TestRig();
rig.setup(t.name);
rig.createFile('test.txt', 'hello world');
const output = rig.run(`read the file name test.txt`);
assert.ok(output.toLowerCase().includes('hello'));
});
test('writes a file', (t) => {
const rig = new TestRig();
rig.setup(t.name);
rig.createFile('test.txt', '');
rig.run(`edit test.txt to have a hello world message`);
const fileContent = rig.readFile('test.txt');
assert.ok(fileContent.toLowerCase().includes('hello'));
});

View File

@@ -0,0 +1,89 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { strict as assert } from 'assert';
import { test } from 'node:test';
import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
test('should be able to read a file', async () => {
const rig = new TestRig();
await rig.setup('should be able to read a file');
rig.createFile('test.txt', 'hello world');
const result = await rig.run(
`read the file test.txt and show me its contents`,
);
const foundToolCall = await rig.waitForToolCall('read_file');
// Add debugging information
if (!foundToolCall || !result.includes('hello world')) {
printDebugInfo(rig, result, {
'Found tool call': foundToolCall,
'Contains hello world': result.includes('hello world'),
});
}
assert.ok(foundToolCall, 'Expected to find a read_file tool call');
// Validate model output - will throw if no output, warn if missing expected content
validateModelOutput(result, 'hello world', 'File read test');
});
test('should be able to write a file', async () => {
const rig = new TestRig();
await rig.setup('should be able to write a file');
rig.createFile('test.txt', '');
const result = await rig.run(`edit test.txt to have a hello world message`);
// Accept multiple valid tools for editing files
const foundToolCall = await rig.waitForAnyToolCall([
'write_file',
'edit',
'replace',
]);
// Add debugging information
if (!foundToolCall) {
printDebugInfo(rig, result);
}
assert.ok(
foundToolCall,
'Expected to find a write_file, edit, or replace tool call',
);
// Validate model output - will throw if no output
validateModelOutput(result, null, 'File write test');
const fileContent = rig.readFile('test.txt');
// Add debugging for file content
if (!fileContent.toLowerCase().includes('hello')) {
const writeCalls = rig
.readToolLogs()
.filter((t) => t.toolRequest.name === 'write_file')
.map((t) => t.toolRequest.args);
printDebugInfo(rig, result, {
'File content mismatch': true,
'Expected to contain': 'hello',
'Actual content': fileContent,
'Write tool calls': JSON.stringify(writeCalls),
});
}
assert.ok(
fileContent.toLowerCase().includes('hello'),
'Expected file to contain hello',
);
// Log success info if verbose
if (process.env.VERBOSE === 'true') {
console.log('File written successfully with hello message.');
}
});

View File

@@ -1,19 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig } from './test-helper.js';
test('should be able to search the web', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
const prompt = `what planet do we live on`;
const result = await rig.run(prompt);
assert.ok(result.toLowerCase().includes('earth'));
});

View File

@@ -1,24 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig } from './test-helper.js';
test('should be able to list a directory', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
rig.createFile('file1.txt', 'file 1 content');
rig.mkdir('subdir');
rig.sync();
const prompt = `Can you list the files in the current directory. Display them in the style of 'ls'`;
const result = rig.run(prompt);
const lines = result.split('\n').filter((line) => line.trim() !== '');
assert.ok(lines.some((line) => line.includes('file1.txt')));
assert.ok(lines.some((line) => line.includes('subdir')));
});

View File

@@ -0,0 +1,62 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
import { existsSync } from 'fs';
import { join } from 'path';
test('should be able to list a directory', async () => {
const rig = new TestRig();
await rig.setup('should be able to list a directory');
rig.createFile('file1.txt', 'file 1 content');
rig.mkdir('subdir');
rig.sync();
// Poll for filesystem changes to propagate in containers
await rig.poll(
() => {
// Check if the files exist in the test directory
const file1Path = join(rig.testDir!, 'file1.txt');
const subdirPath = join(rig.testDir!, 'subdir');
return existsSync(file1Path) && existsSync(subdirPath);
},
1000, // 1 second max wait
50, // check every 50ms
);
const prompt = `Can you list the files in the current directory. Display them in the style of 'ls'`;
const result = await rig.run(prompt);
const foundToolCall = await rig.waitForToolCall('list_directory');
// Add debugging information
if (
!foundToolCall ||
!result.includes('file1.txt') ||
!result.includes('subdir')
) {
const allTools = printDebugInfo(rig, result, {
'Found tool call': foundToolCall,
'Contains file1.txt': result.includes('file1.txt'),
'Contains subdir': result.includes('subdir'),
});
console.error(
'List directory calls:',
allTools
.filter((t) => t.toolRequest.name === 'list_directory')
.map((t) => t.toolRequest.args),
);
}
assert.ok(foundToolCall, 'Expected to find a list_directory tool call');
// Validate model output - will throw if no output, warn if missing expected content
validateModelOutput(result, ['file1.txt', 'subdir'], 'List directory test');
});

View File

@@ -0,0 +1,199 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
/**
* This test verifies we can match maximum schema depth errors from Gemini
* and then detect and warn about the potential tools that caused the error.
*/
import { test, describe, before } from 'node:test';
import { strict as assert } from 'node:assert';
import { TestRig } from './test-helper.js';
import { join } from 'path';
import { fileURLToPath } from 'url';
import { writeFileSync } from 'fs';
const __dirname = fileURLToPath(new URL('.', import.meta.url));
// Create a minimal MCP server that doesn't require external dependencies
// This implements the MCP protocol directly using Node.js built-ins
const serverScript = `#!/usr/bin/env node
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
const readline = require('readline');
const fs = require('fs');
// Debug logging to stderr (only when MCP_DEBUG or VERBOSE is set)
const debugEnabled = process.env.MCP_DEBUG === 'true' || process.env.VERBOSE === 'true';
function debug(msg) {
if (debugEnabled) {
fs.writeSync(2, \`[MCP-DEBUG] \${msg}\\n\`);
}
}
debug('MCP server starting...');
// Simple JSON-RPC implementation for MCP
class SimpleJSONRPC {
constructor() {
this.handlers = new Map();
this.rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
terminal: false
});
this.rl.on('line', (line) => {
debug(\`Received line: \${line}\`);
try {
const message = JSON.parse(line);
debug(\`Parsed message: \${JSON.stringify(message)}\`);
this.handleMessage(message);
} catch (e) {
debug(\`Parse error: \${e.message}\`);
}
});
}
send(message) {
const msgStr = JSON.stringify(message);
debug(\`Sending message: \${msgStr}\`);
process.stdout.write(msgStr + '\\n');
}
async handleMessage(message) {
if (message.method && this.handlers.has(message.method)) {
try {
const result = await this.handlers.get(message.method)(message.params || {});
if (message.id !== undefined) {
this.send({
jsonrpc: '2.0',
id: message.id,
result
});
}
} catch (error) {
if (message.id !== undefined) {
this.send({
jsonrpc: '2.0',
id: message.id,
error: {
code: -32603,
message: error.message
}
});
}
}
} else if (message.id !== undefined) {
this.send({
jsonrpc: '2.0',
id: message.id,
error: {
code: -32601,
message: 'Method not found'
}
});
}
}
on(method, handler) {
this.handlers.set(method, handler);
}
}
// Create MCP server
const rpc = new SimpleJSONRPC();
// Handle initialize
rpc.on('initialize', async (params) => {
debug('Handling initialize request');
return {
protocolVersion: '2024-11-05',
capabilities: {
tools: {}
},
serverInfo: {
name: 'cyclic-schema-server',
version: '1.0.0'
}
};
});
// Handle tools/list
rpc.on('tools/list', async () => {
debug('Handling tools/list request');
return {
tools: [{
name: 'tool_with_cyclic_schema',
inputSchema: {
type: 'object',
properties: {
data: {
type: 'array',
items: {
type: 'object',
properties: {
child: { $ref: '#/properties/data/items' },
},
},
},
},
}
}]
};
});
// Send initialization notification
rpc.send({
jsonrpc: '2.0',
method: 'initialized'
});
`;
describe('mcp server with cyclic tool schema is detected', () => {
const rig = new TestRig();
before(async () => {
// Setup test directory with MCP server configuration
await rig.setup('cyclic-schema-mcp-server', {
settings: {
mcpServers: {
'cyclic-schema-server': {
command: 'node',
args: ['mcp-server.cjs'],
},
},
},
});
// Create server script in the test directory
const testServerPath = join(rig.testDir!, 'mcp-server.cjs');
writeFileSync(testServerPath, serverScript);
// Make the script executable (though running with 'node' should work anyway)
if (process.platform !== 'win32') {
const { chmodSync } = await import('fs');
chmodSync(testServerPath, 0o755);
}
});
test('should error and suggest disabling the cyclic tool', async () => {
// Just run any command to trigger the schema depth error.
// If this test starts failing, check `isSchemaDepthError` from
// geminiChat.ts to see if it needs to be updated.
// Or, possibly it could mean that gemini has fixed the issue.
const output = await rig.run('hello');
assert.match(
output,
/Skipping tool 'tool_with_cyclic_schema' from MCP server 'cyclic-schema-server' because it has missing types in its parameter schema/,
);
});
});

View File

@@ -1,22 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig } from './test-helper.js';
test.skip('should be able to read multiple files', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
rig.createFile('file1.txt', 'file 1 content');
rig.createFile('file2.txt', 'file 2 content');
const prompt = `Read the files in this directory, list them and print them to the screen`;
const result = await rig.run(prompt);
assert.ok(result.includes('file 1 content'));
assert.ok(result.includes('file 2 content'));
});

View File

@@ -0,0 +1,50 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
test('should be able to read multiple files', async () => {
const rig = new TestRig();
await rig.setup('should be able to read multiple files');
rig.createFile('file1.txt', 'file 1 content');
rig.createFile('file2.txt', 'file 2 content');
const prompt = `Please use read_many_files to read file1.txt and file2.txt and show me what's in them`;
const result = await rig.run(prompt);
// Check for either read_many_files or multiple read_file calls
const allTools = rig.readToolLogs();
const readManyFilesCall = await rig.waitForToolCall('read_many_files');
const readFileCalls = allTools.filter(
(t) => t.toolRequest.name === 'read_file',
);
// Accept either read_many_files OR at least 2 read_file calls
const foundValidPattern = readManyFilesCall || readFileCalls.length >= 2;
// Add debugging information
if (!foundValidPattern) {
printDebugInfo(rig, result, {
'read_many_files called': readManyFilesCall,
'read_file calls': readFileCalls.length,
});
}
assert.ok(
foundValidPattern,
'Expected to find either read_many_files or multiple read_file tool calls',
);
// Validate model output - will throw if no output, warn if missing expected content
validateModelOutput(
result,
['file 1 content', 'file 2 content'],
'Read many files test',
);
});

View File

@@ -1,22 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig } from './test-helper.js';
test('should be able to replace content in a file', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
const fileName = 'file_to_replace.txt';
rig.createFile(fileName, 'original content');
const prompt = `Can you replace 'original' with 'replaced' in the file 'file_to_replace.txt'`;
await rig.run(prompt);
const newFileContent = rig.readFile(fileName);
assert.strictEqual(newFileContent, 'replaced content');
});

View File

@@ -0,0 +1,66 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
test('should be able to replace content in a file', async () => {
const rig = new TestRig();
await rig.setup('should be able to replace content in a file');
const fileName = 'file_to_replace.txt';
const originalContent = 'original content';
const expectedContent = 'replaced content';
rig.createFile(fileName, originalContent);
const prompt = `Can you replace 'original' with 'replaced' in the file 'file_to_replace.txt'`;
const result = await rig.run(prompt);
const foundToolCall = await rig.waitForToolCall('replace');
// Add debugging information
if (!foundToolCall) {
printDebugInfo(rig, result);
}
assert.ok(foundToolCall, 'Expected to find a replace tool call');
// Validate model output - will throw if no output, warn if missing expected content
validateModelOutput(
result,
['replaced', 'file_to_replace.txt'],
'Replace content test',
);
const newFileContent = rig.readFile(fileName);
// Add debugging for file content
if (newFileContent !== expectedContent) {
console.error('File content mismatch - Debug info:');
console.error('Expected:', expectedContent);
console.error('Actual:', newFileContent);
console.error(
'Tool calls:',
rig.readToolLogs().map((t) => ({
name: t.toolRequest.name,
args: t.toolRequest.args,
})),
);
}
assert.strictEqual(
newFileContent,
expectedContent,
'File content should be updated correctly',
);
// Log success info if verbose
if (process.env.VERBOSE === 'true') {
console.log('File replaced successfully. New content:', newFileContent);
}
});

View File

@@ -52,13 +52,13 @@ async function main() {
const testPatterns =
args.length > 0
? args.map((arg) => `integration-tests/${arg}.test.js`)
: ['integration-tests/*.test.js'];
? args.map((arg) => `integration-tests/${arg}.test.ts`)
: ['integration-tests/*.test.ts'];
const testFiles = glob.sync(testPatterns, { cwd: rootDir, absolute: true });
for (const testFile of testFiles) {
const testFileName = basename(testFile);
console.log(`\tFound test file: ${testFileName}`);
console.log(` Found test file: ${testFileName}`);
}
const MAX_RETRIES = 3;
@@ -92,7 +92,7 @@ async function main() {
}
nodeArgs.push(testFile);
const child = spawn('node', nodeArgs, {
const child = spawn('npx', ['tsx', ...nodeArgs], {
stdio: 'pipe',
env: {
...process.env,
@@ -101,6 +101,7 @@ async function main() {
KEEP_OUTPUT: keepOutput.toString(),
VERBOSE: verbose.toString(),
TEST_FILE_NAME: testFileName,
TELEMETRY_LOG_FILE: join(testFileDir, 'telemetry.log'),
},
});

View File

@@ -1,31 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig } from './test-helper.js';
test('should be able to run a shell command', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
rig.createFile('blah.txt', 'some content');
const prompt = `Can you use ls to list the contents of the current folder`;
const result = rig.run(prompt);
assert.ok(result.includes('blah.txt'));
});
test('should be able to run a shell command via stdin', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
rig.createFile('blah.txt', 'some content');
const prompt = `Can you use ls to list the contents of the current folder`;
const result = rig.run({ stdin: prompt });
assert.ok(result.includes('blah.txt'));
});

View File

@@ -0,0 +1,63 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
test('should be able to run a shell command', async () => {
const rig = new TestRig();
await rig.setup('should be able to run a shell command');
const prompt = `Please run the command "echo hello-world" and show me the output`;
const result = await rig.run(prompt);
const foundToolCall = await rig.waitForToolCall('run_shell_command');
// Add debugging information
if (!foundToolCall || !result.includes('hello-world')) {
printDebugInfo(rig, result, {
'Found tool call': foundToolCall,
'Contains hello-world': result.includes('hello-world'),
});
}
assert.ok(foundToolCall, 'Expected to find a run_shell_command tool call');
// Validate model output - will throw if no output, warn if missing expected content
// Model often reports exit code instead of showing output
validateModelOutput(
result,
['hello-world', 'exit code 0'],
'Shell command test',
);
});
test('should be able to run a shell command via stdin', async () => {
const rig = new TestRig();
await rig.setup('should be able to run a shell command via stdin');
const prompt = `Please run the command "echo test-stdin" and show me what it outputs`;
const result = await rig.run({ stdin: prompt });
const foundToolCall = await rig.waitForToolCall('run_shell_command');
// Add debugging information
if (!foundToolCall || !result.includes('test-stdin')) {
printDebugInfo(rig, result, {
'Test type': 'Stdin test',
'Found tool call': foundToolCall,
'Contains test-stdin': result.includes('test-stdin'),
});
}
assert.ok(foundToolCall, 'Expected to find a run_shell_command tool call');
// Validate model output - will throw if no output, warn if missing expected content
validateModelOutput(result, 'test-stdin', 'Shell command stdin test');
});

View File

@@ -1,21 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig } from './test-helper.js';
test('should be able to save to memory', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
const prompt = `remember that my favorite color is blue.
what is my favorite color? tell me that and surround it with $ symbol`;
const result = await rig.run(prompt);
assert.ok(result.toLowerCase().includes('$blue$'));
});

View File

@@ -0,0 +1,41 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
test('should be able to save to memory', async () => {
const rig = new TestRig();
await rig.setup('should be able to save to memory');
const prompt = `remember that my favorite color is blue.
what is my favorite color? tell me that and surround it with $ symbol`;
const result = await rig.run(prompt);
const foundToolCall = await rig.waitForToolCall('save_memory');
// Add debugging information
if (!foundToolCall || !result.toLowerCase().includes('blue')) {
const allTools = printDebugInfo(rig, result, {
'Found tool call': foundToolCall,
'Contains blue': result.toLowerCase().includes('blue'),
});
console.error(
'Memory tool calls:',
allTools
.filter((t) => t.toolRequest.name === 'save_memory')
.map((t) => t.toolRequest.args),
);
}
assert.ok(foundToolCall, 'Expected to find a save_memory tool call');
// Validate model output - will throw if no output, warn if missing expected content
validateModelOutput(result, 'blue', 'Save memory test');
});

View File

@@ -1,70 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test, describe, before, after } from 'node:test';
import { strict as assert } from 'node:assert';
import { TestRig } from './test-helper.js';
import { spawn } from 'child_process';
import { join } from 'path';
import { fileURLToPath } from 'url';
import { writeFileSync, unlinkSync } from 'fs';
const __dirname = fileURLToPath(new URL('.', import.meta.url));
const serverScriptPath = join(__dirname, './temp-server.js');
const serverScript = `
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
const server = new McpServer({
name: 'addition-server',
version: '1.0.0',
});
server.registerTool(
'add',
{
title: 'Addition Tool',
description: 'Add two numbers',
inputSchema: { a: z.number(), b: z.number() },
},
async ({ a, b }) => ({
content: [{ type: 'text', text: String(a + b) }],
}),
);
const transport = new StdioServerTransport();
await server.connect(transport);
`;
describe('simple-mcp-server', () => {
const rig = new TestRig();
let child;
before(() => {
writeFileSync(serverScriptPath, serverScript);
child = spawn('node', [serverScriptPath], {
stdio: ['pipe', 'pipe', 'pipe'],
});
child.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
// Wait for the server to be ready
return new Promise((resolve) => setTimeout(resolve, 2000));
});
after(() => {
child.kill();
unlinkSync(serverScriptPath);
});
test('should add two numbers', () => {
rig.setup('should add two numbers');
const output = rig.run('add 5 and 10');
assert.ok(output.includes('15'));
});
});

View File

@@ -0,0 +1,208 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
/**
* This test verifies MCP (Model Context Protocol) server integration.
* It uses a minimal MCP server implementation that doesn't require
* external dependencies, making it compatible with Docker sandbox mode.
*/
import { test, describe, before } from 'node:test';
import { strict as assert } from 'node:assert';
import { TestRig, validateModelOutput } from './test-helper.js';
import { join } from 'path';
import { writeFileSync } from 'fs';
// Create a minimal MCP server that doesn't require external dependencies
// This implements the MCP protocol directly using Node.js built-ins
const serverScript = `#!/usr/bin/env node
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
const readline = require('readline');
const fs = require('fs');
// Debug logging to stderr (only when MCP_DEBUG or VERBOSE is set)
const debugEnabled = process.env.MCP_DEBUG === 'true' || process.env.VERBOSE === 'true';
function debug(msg) {
if (debugEnabled) {
fs.writeSync(2, \`[MCP-DEBUG] \${msg}\\n\`);
}
}
debug('MCP server starting...');
// Simple JSON-RPC implementation for MCP
class SimpleJSONRPC {
constructor() {
this.handlers = new Map();
this.rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
terminal: false
});
this.rl.on('line', (line) => {
debug(\`Received line: \${line}\`);
try {
const message = JSON.parse(line);
debug(\`Parsed message: \${JSON.stringify(message)}\`);
this.handleMessage(message);
} catch (e) {
debug(\`Parse error: \${e.message}\`);
}
});
}
send(message) {
const msgStr = JSON.stringify(message);
debug(\`Sending message: \${msgStr}\`);
process.stdout.write(msgStr + '\\n');
}
async handleMessage(message) {
if (message.method && this.handlers.has(message.method)) {
try {
const result = await this.handlers.get(message.method)(message.params || {});
if (message.id !== undefined) {
this.send({
jsonrpc: '2.0',
id: message.id,
result
});
}
} catch (error) {
if (message.id !== undefined) {
this.send({
jsonrpc: '2.0',
id: message.id,
error: {
code: -32603,
message: error.message
}
});
}
}
} else if (message.id !== undefined) {
this.send({
jsonrpc: '2.0',
id: message.id,
error: {
code: -32601,
message: 'Method not found'
}
});
}
}
on(method, handler) {
this.handlers.set(method, handler);
}
}
// Create MCP server
const rpc = new SimpleJSONRPC();
// Handle initialize
rpc.on('initialize', async (params) => {
debug('Handling initialize request');
return {
protocolVersion: '2024-11-05',
capabilities: {
tools: {}
},
serverInfo: {
name: 'addition-server',
version: '1.0.0'
}
};
});
// Handle tools/list
rpc.on('tools/list', async () => {
debug('Handling tools/list request');
return {
tools: [{
name: 'add',
description: 'Add two numbers',
inputSchema: {
type: 'object',
properties: {
a: { type: 'number', description: 'First number' },
b: { type: 'number', description: 'Second number' }
},
required: ['a', 'b']
}
}]
};
});
// Handle tools/call
rpc.on('tools/call', async (params) => {
debug(\`Handling tools/call request for tool: \${params.name}\`);
if (params.name === 'add') {
const { a, b } = params.arguments;
return {
content: [{
type: 'text',
text: String(a + b)
}]
};
}
throw new Error('Unknown tool: ' + params.name);
});
// Send initialization notification
rpc.send({
jsonrpc: '2.0',
method: 'initialized'
});
`;
describe('simple-mcp-server', () => {
const rig = new TestRig();
before(async () => {
// Setup test directory with MCP server configuration
await rig.setup('simple-mcp-server', {
settings: {
mcpServers: {
'addition-server': {
command: 'node',
args: ['mcp-server.cjs'],
},
},
},
});
// Create server script in the test directory
const testServerPath = join(rig.testDir!, 'mcp-server.cjs');
writeFileSync(testServerPath, serverScript);
// Make the script executable (though running with 'node' should work anyway)
if (process.platform !== 'win32') {
const { chmodSync } = await import('fs');
chmodSync(testServerPath, 0o755);
}
});
test('should add two numbers', async () => {
// Test directory is already set up in before hook
// Just run the command - MCP server config is in settings.json
const output = await rig.run('add 5 and 10');
const foundToolCall = await rig.waitForToolCall('add');
assert.ok(foundToolCall, 'Expected to find an add tool call');
// Validate model output - will throw if no output, fail if missing expected content
validateModelOutput(output, '15', 'MCP server test');
assert.ok(output.includes('15'), 'Expected output to contain the sum (15)');
});
});
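For reference, the server script above speaks newline-delimited JSON-RPC 2.0 over stdio. A minimal sketch of the round trip the CLI would drive against the 'add' tool, with the message shapes taken from the handlers above and the id values purely illustrative:
// Hypothetical client-side view of the stdio exchange handled by SimpleJSONRPC above.
const toolsCallRequest = {
  jsonrpc: '2.0',
  id: 2, // illustrative request id
  method: 'tools/call',
  params: { name: 'add', arguments: { a: 5, b: 10 } },
};
// The tools/call handler returns String(a + b) as text content, so the reply would be:
const toolsCallResponse = {
  jsonrpc: '2.0',
  id: 2,
  result: { content: [{ type: 'text', text: '15' }] },
};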

View File

@@ -1,101 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { execSync } from 'child_process';
import { mkdirSync, writeFileSync, readFileSync } from 'fs';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
import { env } from 'process';
const __dirname = dirname(fileURLToPath(import.meta.url));
function sanitizeTestName(name) {
return name
.toLowerCase()
.replace(/[^a-z0-9]/g, '-')
.replace(/-+/g, '-');
}
export class TestRig {
constructor() {
this.bundlePath = join(__dirname, '..', 'bundle/gemini.js');
this.testDir = null;
}
setup(testName) {
this.testName = testName;
const sanitizedName = sanitizeTestName(testName);
this.testDir = join(env.INTEGRATION_TEST_FILE_DIR, sanitizedName);
mkdirSync(this.testDir, { recursive: true });
}
createFile(fileName, content) {
const filePath = join(this.testDir, fileName);
writeFileSync(filePath, content);
return filePath;
}
mkdir(dir) {
mkdirSync(join(this.testDir, dir));
}
sync() {
// ensure file system is done before spawning
execSync('sync', { cwd: this.testDir });
}
run(promptOrOptions, ...args) {
let command = `node ${this.bundlePath} --yolo`;
const execOptions = {
cwd: this.testDir,
encoding: 'utf-8',
};
if (typeof promptOrOptions === 'string') {
command += ` --prompt "${promptOrOptions}"`;
} else if (
typeof promptOrOptions === 'object' &&
promptOrOptions !== null
) {
if (promptOrOptions.prompt) {
command += ` --prompt "${promptOrOptions.prompt}"`;
}
if (promptOrOptions.stdin) {
execOptions.input = promptOrOptions.stdin;
}
}
command += ` ${args.join(' ')}`;
const output = execSync(command, execOptions);
if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
const testId = `${env.TEST_FILE_NAME.replace(
'.test.js',
'',
)}:${this.testName.replace(/ /g, '-')}`;
console.log(`--- TEST: ${testId} ---`);
console.log(output);
console.log(`--- END TEST: ${testId} ---`);
}
return output;
}
readFile(fileName) {
const content = readFileSync(join(this.testDir, fileName), 'utf-8');
if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
const testId = `${env.TEST_FILE_NAME.replace(
'.test.js',
'',
)}:${this.testName.replace(/ /g, '-')}`;
console.log(`--- FILE: ${testId}/${fileName} ---`);
console.log(content);
console.log(`--- END FILE: ${testId}/${fileName} ---`);
}
return content;
}
}

View File

@@ -0,0 +1,642 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { execSync, spawn } from 'child_process';
import { parse } from 'shell-quote';
import { mkdirSync, writeFileSync, readFileSync } from 'fs';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
import { env } from 'process';
import { fileExists } from '../scripts/telemetry_utils.js';
const __dirname = dirname(fileURLToPath(import.meta.url));
function sanitizeTestName(name: string) {
return name
.toLowerCase()
.replace(/[^a-z0-9]/g, '-')
.replace(/-+/g, '-');
}
// Helper to create detailed error messages
export function createToolCallErrorMessage(
expectedTools: string | string[],
foundTools: string[],
result: string,
) {
const expectedStr = Array.isArray(expectedTools)
? expectedTools.join(' or ')
: expectedTools;
return (
`Expected to find ${expectedStr} tool call(s). ` +
`Found: ${foundTools.length > 0 ? foundTools.join(', ') : 'none'}. ` +
`Output preview: ${result ? result.substring(0, 200) + '...' : 'no output'}`
);
}
// Helper to print debug information when tests fail
export function printDebugInfo(
rig: TestRig,
result: string,
context: Record<string, unknown> = {},
) {
console.error('Test failed - Debug info:');
console.error('Result length:', result.length);
console.error('Result (first 500 chars):', result.substring(0, 500));
console.error(
'Result (last 500 chars):',
result.substring(result.length - 500),
);
// Print any additional context provided
Object.entries(context).forEach(([key, value]) => {
console.error(`${key}:`, value);
});
// Check what tools were actually called
const allTools = rig.readToolLogs();
console.error(
'All tool calls found:',
allTools.map((t) => t.toolRequest.name),
);
return allTools;
}
// Helper to validate model output and warn about unexpected content
export function validateModelOutput(
result: string,
expectedContent: string | (string | RegExp)[] | null = null,
testName = '',
) {
// First, check if there's any output at all (this should fail the test if missing)
if (!result || result.trim().length === 0) {
throw new Error('Expected LLM to return some output');
}
// If expectedContent is provided, check for it and warn if missing
if (expectedContent) {
const contents = Array.isArray(expectedContent)
? expectedContent
: [expectedContent];
const missingContent = contents.filter((content) => {
if (typeof content === 'string') {
return !result.toLowerCase().includes(content.toLowerCase());
} else if (content instanceof RegExp) {
return !content.test(result);
}
return false;
});
if (missingContent.length > 0) {
console.warn(
`Warning: LLM did not include expected content in response: ${missingContent.join(', ')}.`,
'This is not ideal but not a test failure.',
);
console.warn(
'The tool was called successfully, which is the main requirement.',
);
return false;
} else if (process.env.VERBOSE === 'true') {
console.log(`${testName}: Model output validated successfully.`);
}
return true;
}
return true;
}
export class TestRig {
bundlePath: string;
testDir: string | null;
testName?: string;
_lastRunStdout?: string;
constructor() {
this.bundlePath = join(__dirname, '..', 'bundle/gemini.js');
this.testDir = null;
}
// Get timeout based on environment
getDefaultTimeout() {
if (env.CI) return 60000; // 1 minute in CI
if (env.GEMINI_SANDBOX) return 30000; // 30s in containers
return 15000; // 15s locally
}
setup(
testName: string,
options: { settings?: Record<string, unknown> } = {},
) {
this.testName = testName;
const sanitizedName = sanitizeTestName(testName);
this.testDir = join(env.INTEGRATION_TEST_FILE_DIR!, sanitizedName);
mkdirSync(this.testDir, { recursive: true });
// Create a settings file to point the CLI to the local collector
const geminiDir = join(this.testDir, '.qwen');
mkdirSync(geminiDir, { recursive: true });
// In sandbox mode, use an absolute path for telemetry inside the container
// The container mounts the test directory at the same path as the host
const telemetryPath =
env.GEMINI_SANDBOX && env.GEMINI_SANDBOX !== 'false'
? join(this.testDir, 'telemetry.log') // Absolute path in test directory
: env.TELEMETRY_LOG_FILE; // Absolute path for non-sandbox
const settings = {
telemetry: {
enabled: true,
target: 'local',
otlpEndpoint: '',
outfile: telemetryPath,
},
sandbox: env.GEMINI_SANDBOX !== 'false' ? env.GEMINI_SANDBOX : false,
...options.settings, // Allow tests to override/add settings
};
writeFileSync(
join(geminiDir, 'settings.json'),
JSON.stringify(settings, null, 2),
);
}
createFile(fileName: string, content: string) {
const filePath = join(this.testDir!, fileName);
writeFileSync(filePath, content);
return filePath;
}
mkdir(dir: string) {
mkdirSync(join(this.testDir!, dir), { recursive: true });
}
sync() {
// ensure file system is done before spawning
execSync('sync', { cwd: this.testDir! });
}
run(
promptOrOptions: string | { prompt?: string; stdin?: string },
...args: string[]
): Promise<string> {
let command = `node ${this.bundlePath} --yolo`;
const execOptions: {
cwd: string;
encoding: 'utf-8';
input?: string;
} = {
cwd: this.testDir!,
encoding: 'utf-8',
};
if (typeof promptOrOptions === 'string') {
command += ` --prompt ${JSON.stringify(promptOrOptions)}`;
} else if (
typeof promptOrOptions === 'object' &&
promptOrOptions !== null
) {
if (promptOrOptions.prompt) {
command += ` --prompt ${JSON.stringify(promptOrOptions.prompt)}`;
}
if (promptOrOptions.stdin) {
execOptions.input = promptOrOptions.stdin;
}
}
command += ` ${args.join(' ')}`;
const commandArgs = parse(command);
const node = commandArgs.shift() as string;
const child = spawn(node, commandArgs as string[], {
cwd: this.testDir!,
stdio: 'pipe',
});
let stdout = '';
let stderr = '';
// Handle stdin if provided
if (execOptions.input) {
child.stdin!.write(execOptions.input);
child.stdin!.end();
}
child.stdout!.on('data', (data: Buffer) => {
stdout += data;
if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
process.stdout.write(data);
}
});
child.stderr!.on('data', (data: Buffer) => {
stderr += data;
if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
process.stderr.write(data);
}
});
const promise = new Promise<string>((resolve, reject) => {
child.on('close', (code: number) => {
if (code === 0) {
// Store the raw stdout for Podman telemetry parsing
this._lastRunStdout = stdout;
// Filter out telemetry output when running with Podman
// Podman seems to output telemetry to stdout even when writing to file
let result = stdout;
if (env.GEMINI_SANDBOX === 'podman') {
// Remove telemetry JSON objects from output
// They are multi-line JSON objects that start with { and contain telemetry fields
const lines = result.split('\n');
const filteredLines = [];
let inTelemetryObject = false;
let braceDepth = 0;
for (const line of lines) {
if (!inTelemetryObject && line.trim() === '{') {
// Check if this might be start of telemetry object
inTelemetryObject = true;
braceDepth = 1;
} else if (inTelemetryObject) {
// Count braces to track nesting
for (const char of line) {
if (char === '{') braceDepth++;
else if (char === '}') braceDepth--;
}
// Check if we've closed all braces
if (braceDepth === 0) {
inTelemetryObject = false;
// Skip this line (the closing brace)
continue;
}
} else {
// Not in telemetry object, keep the line
filteredLines.push(line);
}
}
result = filteredLines.join('\n');
}
// If we have stderr output, include that also
if (stderr) {
result += `\n\nStdErr:\n${stderr}`;
}
resolve(result);
} else {
reject(new Error(`Process exited with code ${code}:\n${stderr}`));
}
});
});
return promise;
}
readFile(fileName: string) {
const content = readFileSync(join(this.testDir!, fileName), 'utf-8');
if (env.KEEP_OUTPUT === 'true' || env.VERBOSE === 'true') {
const testId = `${env.TEST_FILE_NAME!.replace(
'.test.js',
'',
)}:${this.testName!.replace(/ /g, '-')}`;
console.log(`--- FILE: ${testId}/${fileName} ---`);
console.log(content);
console.log(`--- END FILE: ${testId}/${fileName} ---`);
}
return content;
}
async cleanup() {
// Clean up test directory
if (this.testDir && !env.KEEP_OUTPUT) {
try {
execSync(`rm -rf ${this.testDir}`);
} catch (error) {
// Ignore cleanup errors
if (env.VERBOSE === 'true') {
console.warn('Cleanup warning:', (error as Error).message);
}
}
}
}
async waitForTelemetryReady() {
// In sandbox mode, telemetry is written to a relative path in the test directory
const logFilePath =
env.GEMINI_SANDBOX && env.GEMINI_SANDBOX !== 'false'
? join(this.testDir!, 'telemetry.log')
: env.TELEMETRY_LOG_FILE;
if (!logFilePath) return;
// Wait for telemetry file to exist and have content
await this.poll(
() => {
if (!fileExists(logFilePath)) return false;
try {
const content = readFileSync(logFilePath, 'utf-8');
// Check if file has meaningful content (at least one complete JSON object)
return content.includes('"event.name"');
} catch {
return false;
}
},
2000, // 2 seconds max - reduced since telemetry should flush on exit now
100, // check every 100ms
);
}
async waitForToolCall(toolName: string, timeout?: number) {
// Use environment-specific timeout
if (!timeout) {
timeout = this.getDefaultTimeout();
}
// Wait for telemetry to be ready before polling for tool calls
await this.waitForTelemetryReady();
return this.poll(
() => {
const toolLogs = this.readToolLogs();
return toolLogs.some((log) => log.toolRequest.name === toolName);
},
timeout,
100,
);
}
async waitForAnyToolCall(toolNames: string[], timeout?: number) {
// Use environment-specific timeout
if (!timeout) {
timeout = this.getDefaultTimeout();
}
// Wait for telemetry to be ready before polling for tool calls
await this.waitForTelemetryReady();
return this.poll(
() => {
const toolLogs = this.readToolLogs();
return toolNames.some((name) =>
toolLogs.some((log) => log.toolRequest.name === name),
);
},
timeout,
100,
);
}
async poll(
predicate: () => boolean,
timeout: number,
interval: number,
): Promise<boolean> {
const startTime = Date.now();
let attempts = 0;
while (Date.now() - startTime < timeout) {
attempts++;
const result = predicate();
if (env.VERBOSE === 'true' && attempts % 5 === 0) {
console.log(
`Poll attempt ${attempts}: ${result ? 'success' : 'waiting...'}`,
);
}
if (result) {
return true;
}
await new Promise((resolve) => setTimeout(resolve, interval));
}
if (env.VERBOSE === 'true') {
console.log(`Poll timed out after ${attempts} attempts`);
}
return false;
}
_parseToolLogsFromStdout(stdout: string) {
const logs: {
timestamp: number;
toolRequest: {
name: string;
args: string;
success: boolean;
duration_ms: number;
};
}[] = [];
// The console output from Podman is JavaScript object notation, not JSON
// Look for tool call events in the output
// Updated regex to handle tool names with hyphens and underscores
const toolCallPattern =
/body:\s*'Tool call:\s*([\w-]+)\..*?Success:\s*(\w+)\..*?Duration:\s*(\d+)ms\.'/g;
const matches = [...stdout.matchAll(toolCallPattern)];
for (const match of matches) {
const toolName = match[1];
const success = match[2] === 'true';
const duration = parseInt(match[3], 10);
// Try to find function_args nearby
const matchIndex = match.index || 0;
const contextStart = Math.max(0, matchIndex - 500);
const contextEnd = Math.min(stdout.length, matchIndex + 500);
const context = stdout.substring(contextStart, contextEnd);
// Look for function_args in the context
let args = '{}';
const argsMatch = context.match(/function_args:\s*'([^']+)'/);
if (argsMatch) {
args = argsMatch[1];
}
// Also try to find function_name to double-check
// Updated regex to handle tool names with hyphens and underscores
const nameMatch = context.match(/function_name:\s*'([\w-]+)'/);
const actualToolName = nameMatch ? nameMatch[1] : toolName;
logs.push({
timestamp: Date.now(),
toolRequest: {
name: actualToolName,
args: args,
success: success,
duration_ms: duration,
},
});
}
// If no matches found with the simple pattern, try the JSON parsing approach
// in case the format changes
if (logs.length === 0) {
const lines = stdout.split('\n');
let currentObject = '';
let inObject = false;
let braceDepth = 0;
for (const line of lines) {
if (!inObject && line.trim() === '{') {
inObject = true;
braceDepth = 1;
currentObject = line + '\n';
} else if (inObject) {
currentObject += line + '\n';
// Count braces
for (const char of line) {
if (char === '{') braceDepth++;
else if (char === '}') braceDepth--;
}
// If we've closed all braces, try to parse the object
if (braceDepth === 0) {
inObject = false;
try {
const obj = JSON.parse(currentObject);
// Check for tool call in different formats
if (
obj.body &&
obj.body.includes('Tool call:') &&
obj.attributes
) {
const bodyMatch = obj.body.match(/Tool call: (\w+)\./);
if (bodyMatch) {
logs.push({
timestamp: obj.timestamp || Date.now(),
toolRequest: {
name: bodyMatch[1],
args: obj.attributes.function_args || '{}',
success: obj.attributes.success !== false,
duration_ms: obj.attributes.duration_ms || 0,
},
});
}
} else if (
obj.attributes &&
obj.attributes['event.name'] === 'gemini_cli.tool_call'
) {
logs.push({
timestamp: obj.attributes['event.timestamp'],
toolRequest: {
name: obj.attributes.function_name,
args: obj.attributes.function_args,
success: obj.attributes.success,
duration_ms: obj.attributes.duration_ms,
},
});
}
} catch {
// Not valid JSON
}
currentObject = '';
}
}
}
}
return logs;
}
readToolLogs() {
// For Podman, first check if telemetry file exists and has content
// If not, fall back to parsing from stdout
if (env.GEMINI_SANDBOX === 'podman') {
// Try reading from file first
const logFilePath = join(this.testDir!, 'telemetry.log');
if (fileExists(logFilePath)) {
try {
const content = readFileSync(logFilePath, 'utf-8');
if (content && content.includes('"event.name"')) {
// File has content, use normal file parsing
// Continue to the normal file parsing logic below
} else if (this._lastRunStdout) {
// File exists but is empty or doesn't have events, parse from stdout
return this._parseToolLogsFromStdout(this._lastRunStdout);
}
} catch {
// Error reading file, fall back to stdout
if (this._lastRunStdout) {
return this._parseToolLogsFromStdout(this._lastRunStdout);
}
}
} else if (this._lastRunStdout) {
// No file exists, parse from stdout
return this._parseToolLogsFromStdout(this._lastRunStdout);
}
}
// In sandbox mode, telemetry is written to a relative path in the test directory
const logFilePath =
env.GEMINI_SANDBOX && env.GEMINI_SANDBOX !== 'false'
? join(this.testDir!, 'telemetry.log')
: env.TELEMETRY_LOG_FILE;
if (!logFilePath) {
console.warn(`TELEMETRY_LOG_FILE environment variable not set`);
return [];
}
// Check if file exists, if not return empty array (file might not be created yet)
if (!fileExists(logFilePath)) {
return [];
}
const content = readFileSync(logFilePath, 'utf-8');
// Split the content into individual JSON objects
// They are separated by "}\n{"
const jsonObjects = content
.split(/}\s*\n\s*{/)
.map((obj, index, array) => {
// Add back the braces we removed during split
if (index > 0) obj = '{' + obj;
if (index < array.length - 1) obj = obj + '}';
return obj.trim();
})
.filter((obj) => obj);
const logs: {
toolRequest: {
name: string;
args: string;
success: boolean;
duration_ms: number;
};
}[] = [];
for (const jsonStr of jsonObjects) {
try {
const logData = JSON.parse(jsonStr);
// Look for tool call logs
if (
logData.attributes &&
logData.attributes['event.name'] === 'qwen-code.tool_call'
) {
const toolName = logData.attributes.function_name;
logs.push({
toolRequest: {
name: toolName,
args: logData.attributes.function_args,
success: logData.attributes.success,
duration_ms: logData.attributes.duration_ms,
},
});
}
} catch (e) {
// Skip objects that aren't valid JSON
if (env.VERBOSE === 'true') {
console.error(
'Failed to parse telemetry object:',
(e as Error).message,
);
}
}
}
return logs;
}
}
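readToolLogs() above expects the telemetry log to be a stream of concatenated JSON objects whose attributes mark tool calls. A hedged sketch of one record it would pick up, with the field names taken from the parser and the values invented for illustration:
// Illustrative telemetry record shape consumed by readToolLogs() (values are made up).
const exampleTelemetryRecord = {
  attributes: {
    'event.name': 'qwen-code.tool_call',
    function_name: 'write_file',
    function_args: '{"file_path":"dad.txt"}',
    success: true,
    duration_ms: 42,
  },
};
// waitForToolCall('write_file') polls readToolLogs() until a record like this appears.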

View File

@@ -0,0 +1,8 @@
{
"extends": "../tsconfig.json",
"compilerOptions": {
"noEmit": true,
"allowJs": true
},
"include": ["**/*.ts"]
}

View File

@@ -0,0 +1,81 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig, printDebugInfo, validateModelOutput } from './test-helper.js';
test('should be able to search the web', async () => {
// Skip if Tavily key is not configured
if (!process.env.TAVILY_API_KEY) {
console.warn('Skipping web search test: TAVILY_API_KEY not set');
return;
}
const rig = new TestRig();
await rig.setup('should be able to search the web');
let result;
try {
result = await rig.run(`what is the weather in London`);
} catch (error) {
// Network errors can occur in CI environments
if (
error instanceof Error &&
(error.message.includes('network') || error.message.includes('timeout'))
) {
console.warn(
'Skipping test due to network error:',
(error as Error).message,
);
return; // Skip the test
}
throw error; // Re-throw if not a network error
}
const foundToolCall = await rig.waitForToolCall('web_search');
// Add debugging information
if (!foundToolCall) {
const allTools = printDebugInfo(rig, result);
// Check if the tool call failed due to network issues
const failedSearchCalls = allTools.filter(
(t) => t.toolRequest.name === 'web_search' && !t.toolRequest.success,
);
if (failedSearchCalls.length > 0) {
console.warn(
'web_search tool was called but failed, possibly due to network issues',
);
console.warn(
'Failed calls:',
failedSearchCalls.map((t) => t.toolRequest.args),
);
return; // Skip the test if network issues
}
}
assert.ok(foundToolCall, 'Expected to find a call to web_search');
// Validate model output - will throw if no output, warn if missing expected content
const hasExpectedContent = validateModelOutput(
result,
['weather', 'london'],
'Web search test',
);
// If content was missing, log the search queries used
if (!hasExpectedContent) {
const searchCalls = rig
.readToolLogs()
.filter((t) => t.toolRequest.name === 'web_search');
if (searchCalls.length > 0) {
console.warn(
'Search queries used:',
searchCalls.map((t) => t.toolRequest.args),
);
}
}
});

View File

@@ -1,21 +0,0 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import { TestRig } from './test-helper.js';
test('should be able to write a file', async (t) => {
const rig = new TestRig();
rig.setup(t.name);
const prompt = `show me an example of using the write tool. put a dad joke in dad.txt`;
await rig.run(prompt);
const newFilePath = 'dad.txt';
const newFileContent = rig.readFile(newFilePath);
assert.notEqual(newFileContent, '');
});

View File

@@ -0,0 +1,68 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { test } from 'node:test';
import { strict as assert } from 'assert';
import {
TestRig,
createToolCallErrorMessage,
printDebugInfo,
validateModelOutput,
} from './test-helper.js';
test('should be able to write a file', async () => {
const rig = new TestRig();
await rig.setup('should be able to write a file');
const prompt = `show me an example of using the write tool. put a dad joke in dad.txt`;
const result = await rig.run(prompt);
const foundToolCall = await rig.waitForToolCall('write_file');
// Add debugging information
if (!foundToolCall) {
printDebugInfo(rig, result);
}
const allTools = rig.readToolLogs();
assert.ok(
foundToolCall,
createToolCallErrorMessage(
'write_file',
allTools.map((t) => t.toolRequest.name),
result,
),
);
// Validate model output - will throw if no output, warn if missing expected content
validateModelOutput(result, 'dad.txt', 'Write file test');
const newFilePath = 'dad.txt';
const newFileContent = rig.readFile(newFilePath);
// Add debugging for file content
if (newFileContent === '') {
console.error('File was created but is empty');
console.error(
'Tool calls:',
rig.readToolLogs().map((t) => ({
name: t.toolRequest.name,
args: t.toolRequest.args,
})),
);
}
assert.notEqual(newFileContent, '', 'Expected file to have content');
// Log success info if verbose
if (process.env.VERBOSE === 'true') {
console.log(
'File created successfully with content:',
newFileContent.substring(0, 100) + '...',
);
}
});

package-lock.json (generated, 2832 changed lines)

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.0.4",
"version": "0.0.8-nightly.5",
"engines": {
"node": ">=20.0.0"
},
@@ -13,7 +13,7 @@
"url": "git+https://github.com/QwenLM/qwen-code.git"
},
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.4"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.8-nightly.5"
},
"scripts": {
"start": "node scripts/start.js",
@@ -28,7 +28,7 @@
"build:packages": "npm run build --workspaces",
"build:sandbox": "node scripts/build_sandbox.js --skip-npm-install-build",
"bundle": "npm run generate && node esbuild.config.js && node scripts/copy_bundle_assets.js",
"test": "npm run test --workspaces",
"test": "npm run test --workspaces --if-present",
"test:ci": "npm run test:ci --workspaces --if-present && npm run test:scripts",
"test:scripts": "vitest run --config ./scripts/tests/vitest.config.ts",
"test:e2e": "npm run test:integration:sandbox:none -- --verbose --keep-output",
@@ -39,7 +39,7 @@
"lint": "eslint . --ext .ts,.tsx && eslint integration-tests",
"lint:fix": "eslint . --fix && eslint integration-tests --fix",
"lint:ci": "eslint . --ext .ts,.tsx --max-warnings 0 && eslint integration-tests --max-warnings 0",
"format": "prettier --write .",
"format": "prettier --experimental-cli --write .",
"typecheck": "npm run typecheck --workspaces --if-present",
"preflight": "npm run clean && npm ci && npm run format && npm run lint:ci && npm run build && npm run typecheck && npm run test:ci",
"prepare": "npm run bundle",
@@ -57,10 +57,12 @@
"LICENSE"
],
"devDependencies": {
"@types/marked": "^5.0.2",
"@types/micromatch": "^4.0.9",
"@types/mime-types": "^3.0.1",
"@types/minimatch": "^5.1.2",
"@types/mock-fs": "^4.13.4",
"@types/qrcode-terminal": "^0.12.2",
"@types/shell-quote": "^1.7.5",
"@types/uuid": "^10.0.0",
"@vitest/coverage-v8": "^3.1.1",
@@ -81,11 +83,10 @@
"mock-fs": "^5.5.0",
"prettier": "^3.5.3",
"react-devtools-core": "^4.28.5",
"tsx": "^4.20.3",
"typescript-eslint": "^8.30.1",
"vitest": "^3.2.4",
"yargs": "^17.7.2"
},
"dependencies": {
"tiktoken": "^1.0.21"
"yargs": "^17.7.2",
"mnemonist": "^0.40.3"
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "@qwen-code/qwen-code",
"version": "0.0.4",
"version": "0.0.8-nightly.5",
"description": "Qwen Code",
"repository": {
"type": "git",
@@ -25,12 +25,13 @@
"dist"
],
"config": {
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.4"
"sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.0.8-nightly.5"
},
"dependencies": {
"@qwen-code/qwen-code-core": "file:../core",
"@google/genai": "1.9.0",
"@iarna/toml": "^2.2.5",
"@qwen-code/qwen-code-core": "file:../core",
"@modelcontextprotocol/sdk": "^1.15.1",
"@types/update-notifier": "^6.0.8",
"command-exists": "^1.2.9",
"diff": "^7.0.0",
@@ -46,12 +47,14 @@
"lowlight": "^3.3.0",
"mime-types": "^3.0.1",
"open": "^10.1.2",
"qrcode-terminal": "^0.12.0",
"react": "^19.1.0",
"read-package-up": "^11.0.0",
"shell-quote": "^1.8.3",
"string-width": "^7.1.0",
"strip-ansi": "^7.1.0",
"strip-json-comments": "^3.1.1",
"undici": "^7.10.0",
"update-notifier": "^7.3.1",
"yargs": "^17.7.2",
"zod": "^3.23.8"
@@ -73,7 +76,8 @@
"pretty-format": "^30.0.2",
"react-dom": "^19.1.0",
"typescript": "^5.3.3",
"vitest": "^3.1.1"
"vitest": "^3.1.1",
"@qwen-code/qwen-code-test-utils": "file:../test-utils"
},
"engines": {
"node": ">=20"

View File

@@ -239,65 +239,62 @@ class GeminiAgent implements Agent {
);
}
let toolCallId;
const confirmationDetails = await tool.shouldConfirmExecute(
args,
abortSignal,
);
if (confirmationDetails) {
let content: acp.ToolCallContent | null = null;
if (confirmationDetails.type === 'edit') {
content = {
type: 'diff',
path: confirmationDetails.fileName,
oldText: confirmationDetails.originalContent,
newText: confirmationDetails.newContent,
};
}
const result = await this.client.requestToolCallConfirmation({
label: tool.getDescription(args),
icon: tool.icon,
content,
confirmation: toAcpToolCallConfirmation(confirmationDetails),
locations: tool.toolLocations(args),
});
await confirmationDetails.onConfirm(toToolCallOutcome(result.outcome));
switch (result.outcome) {
case 'reject':
return errorResponse(
new Error(`Tool "${fc.name}" not allowed to run by the user.`),
);
case 'cancel':
return errorResponse(
new Error(`Tool "${fc.name}" was canceled by the user.`),
);
case 'allow':
case 'alwaysAllow':
case 'alwaysAllowMcpServer':
case 'alwaysAllowTool':
break;
default: {
const resultOutcome: never = result.outcome;
throw new Error(`Unexpected: ${resultOutcome}`);
}
}
toolCallId = result.id;
} else {
const result = await this.client.pushToolCall({
icon: tool.icon,
label: tool.getDescription(args),
locations: tool.toolLocations(args),
});
toolCallId = result.id;
}
let toolCallId: number | undefined = undefined;
try {
const toolResult: ToolResult = await tool.execute(args, abortSignal);
const invocation = tool.build(args);
const confirmationDetails =
await invocation.shouldConfirmExecute(abortSignal);
if (confirmationDetails) {
let content: acp.ToolCallContent | null = null;
if (confirmationDetails.type === 'edit') {
content = {
type: 'diff',
path: confirmationDetails.fileName,
oldText: confirmationDetails.originalContent,
newText: confirmationDetails.newContent,
};
}
const result = await this.client.requestToolCallConfirmation({
label: invocation.getDescription(),
icon: tool.icon,
content,
confirmation: toAcpToolCallConfirmation(confirmationDetails),
locations: invocation.toolLocations(),
});
await confirmationDetails.onConfirm(toToolCallOutcome(result.outcome));
switch (result.outcome) {
case 'reject':
return errorResponse(
new Error(`Tool "${fc.name}" not allowed to run by the user.`),
);
case 'cancel':
return errorResponse(
new Error(`Tool "${fc.name}" was canceled by the user.`),
);
case 'allow':
case 'alwaysAllow':
case 'alwaysAllowMcpServer':
case 'alwaysAllowTool':
break;
default: {
const resultOutcome: never = result.outcome;
throw new Error(`Unexpected: ${resultOutcome}`);
}
}
toolCallId = result.id;
} else {
const result = await this.client.pushToolCall({
icon: tool.icon,
label: invocation.getDescription(),
locations: invocation.toolLocations(),
});
toolCallId = result.id;
}
const toolResult: ToolResult = await invocation.execute(abortSignal);
const toolCallContent = toToolCallContent(toolResult);
await this.client.updateToolCall({
@@ -320,12 +317,13 @@ class GeminiAgent implements Agent {
return convertToFunctionResponse(fc.name, callId, toolResult.llmContent);
} catch (e) {
const error = e instanceof Error ? e : new Error(String(e));
await this.client.updateToolCall({
toolCallId,
status: 'error',
content: { type: 'markdown', markdown: error.message },
});
if (toolCallId) {
await this.client.updateToolCall({
toolCallId,
status: 'error',
content: { type: 'markdown', markdown: error.message },
});
}
return errorResponse(error);
}
}
@@ -408,7 +406,7 @@ class GeminiAgent implements Agent {
`Path ${pathName} not found directly, attempting glob search.`,
);
try {
const globResult = await globTool.execute(
const globResult = await globTool.buildAndExecute(
{
pattern: `**/*${pathName}*`,
path: this.config.getTargetDir(),
@@ -530,12 +528,15 @@ class GeminiAgent implements Agent {
respectGitIgnore, // Use configuration setting
};
const toolCall = await this.client.pushToolCall({
icon: readManyFilesTool.icon,
label: readManyFilesTool.getDescription(toolArgs),
});
let toolCallId: number | undefined = undefined;
try {
const result = await readManyFilesTool.execute(toolArgs, abortSignal);
const invocation = readManyFilesTool.build(toolArgs);
const toolCall = await this.client.pushToolCall({
icon: readManyFilesTool.icon,
label: invocation.getDescription(),
});
toolCallId = toolCall.id;
const result = await invocation.execute(abortSignal);
const content = toToolCallContent(result) || {
type: 'markdown',
markdown: `Successfully read: ${contentLabelsForDisplay.join(', ')}`,
@@ -578,14 +579,16 @@ class GeminiAgent implements Agent {
return processedQueryParts;
} catch (error: unknown) {
await this.client.updateToolCall({
toolCallId: toolCall.id,
status: 'error',
content: {
type: 'markdown',
markdown: `Error reading files (${contentLabelsForDisplay.join(', ')}): ${getErrorMessage(error)}`,
},
});
if (toolCallId) {
await this.client.updateToolCall({
toolCallId,
status: 'error',
content: {
type: 'markdown',
markdown: `Error reading files (${contentLabelsForDisplay.join(', ')}): ${getErrorMessage(error)}`,
},
});
}
throw error;
}
}
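The changes above move the ACP agent from calling tool.execute(args, abortSignal) directly to a build-then-execute flow. A minimal sketch of the new pattern, using only the names that appear in this diff; the structural type is inferred from these call sites, not from the actual tool interfaces, and the client/confirmation plumbing is omitted:
// Sketch of the invocation flow introduced in this diff (interface shape is an assumption).
async function runToolViaInvocation(
  tool: {
    build(args: unknown): {
      shouldConfirmExecute(signal: AbortSignal): Promise<unknown>;
      execute(signal: AbortSignal): Promise<unknown>;
    };
  },
  args: unknown,
  abortSignal: AbortSignal,
) {
  const invocation = tool.build(args); // validate args and produce an invocation
  const confirmationDetails = await invocation.shouldConfirmExecute(abortSignal);
  if (confirmationDetails) {
    // in the real agent, approval is routed through client.requestToolCallConfirmation
  }
  return invocation.execute(abortSignal);
}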

View File

@@ -0,0 +1,55 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect, vi } from 'vitest';
import { mcpCommand } from './mcp.js';
import { type Argv } from 'yargs';
import yargs from 'yargs';
describe('mcp command', () => {
it('should have correct command definition', () => {
expect(mcpCommand.command).toBe('mcp');
expect(mcpCommand.describe).toBe('Manage MCP servers');
expect(typeof mcpCommand.builder).toBe('function');
expect(typeof mcpCommand.handler).toBe('function');
});
it('should have exactly one option (help flag)', () => {
// Test to ensure that the global 'gemini' flags are not added to the mcp command
const yargsInstance = yargs();
const builtYargs = mcpCommand.builder(yargsInstance);
const options = builtYargs.getOptions();
// Should have exactly 1 option (help flag)
expect(Object.keys(options.key).length).toBe(1);
expect(options.key).toHaveProperty('help');
});
it('should register add, remove, and list subcommands', () => {
const mockYargs = {
command: vi.fn().mockReturnThis(),
demandCommand: vi.fn().mockReturnThis(),
version: vi.fn().mockReturnThis(),
};
mcpCommand.builder(mockYargs as unknown as Argv);
expect(mockYargs.command).toHaveBeenCalledTimes(3);
// Verify that the specific subcommands are registered
const commandCalls = mockYargs.command.mock.calls;
const commandNames = commandCalls.map((call) => call[0].command);
expect(commandNames).toContain('add <name> <commandOrUrl> [args...]');
expect(commandNames).toContain('remove <name>');
expect(commandNames).toContain('list');
expect(mockYargs.demandCommand).toHaveBeenCalledWith(
1,
'You need at least one command before continuing.',
);
});
});

View File

@@ -0,0 +1,27 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
// File for 'gemini mcp' command
import type { CommandModule, Argv } from 'yargs';
import { addCommand } from './mcp/add.js';
import { removeCommand } from './mcp/remove.js';
import { listCommand } from './mcp/list.js';
export const mcpCommand: CommandModule = {
command: 'mcp',
describe: 'Manage MCP servers',
builder: (yargs: Argv) =>
yargs
.command(addCommand)
.command(removeCommand)
.command(listCommand)
.demandCommand(1, 'You need at least one command before continuing.')
.version(false),
handler: () => {
// yargs will automatically show help if no subcommand is provided
// thanks to demandCommand(1) in the builder.
},
};

View File

@@ -0,0 +1,88 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { vi, describe, it, expect, beforeEach } from 'vitest';
import yargs from 'yargs';
import { addCommand } from './add.js';
import { loadSettings, SettingScope } from '../../config/settings.js';
vi.mock('fs/promises', () => ({
readFile: vi.fn(),
writeFile: vi.fn(),
}));
vi.mock('../../config/settings.js', async () => {
const actual = await vi.importActual('../../config/settings.js');
return {
...actual,
loadSettings: vi.fn(),
};
});
const mockedLoadSettings = loadSettings as vi.Mock;
describe('mcp add command', () => {
let parser: yargs.Argv;
let mockSetValue: vi.Mock;
beforeEach(() => {
vi.resetAllMocks();
const yargsInstance = yargs([]).command(addCommand);
parser = yargsInstance;
mockSetValue = vi.fn();
mockedLoadSettings.mockReturnValue({
forScope: () => ({ settings: {} }),
setValue: mockSetValue,
});
});
it('should add a stdio server to project settings', async () => {
await parser.parseAsync(
'add my-server /path/to/server arg1 arg2 -e FOO=bar',
);
expect(mockSetValue).toHaveBeenCalledWith(
SettingScope.Workspace,
'mcpServers',
{
'my-server': {
command: '/path/to/server',
args: ['arg1', 'arg2'],
env: { FOO: 'bar' },
},
},
);
});
it('should add an sse server to user settings', async () => {
await parser.parseAsync(
'add --transport sse sse-server https://example.com/sse-endpoint --scope user -H "X-API-Key: your-key"',
);
expect(mockSetValue).toHaveBeenCalledWith(SettingScope.User, 'mcpServers', {
'sse-server': {
url: 'https://example.com/sse-endpoint',
headers: { 'X-API-Key': 'your-key' },
},
});
});
it('should add an http server to project settings', async () => {
await parser.parseAsync(
'add --transport http http-server https://example.com/mcp -H "Authorization: Bearer your-token"',
);
expect(mockSetValue).toHaveBeenCalledWith(
SettingScope.Workspace,
'mcpServers',
{
'http-server': {
httpUrl: 'https://example.com/mcp',
headers: { Authorization: 'Bearer your-token' },
},
},
);
});
});

View File

@@ -0,0 +1,211 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
// File for 'gemini mcp add' command
import type { CommandModule } from 'yargs';
import { loadSettings, SettingScope } from '../../config/settings.js';
import { MCPServerConfig } from '@qwen-code/qwen-code-core';
async function addMcpServer(
name: string,
commandOrUrl: string,
args: Array<string | number> | undefined,
options: {
scope: string;
transport: string;
env: string[] | undefined;
header: string[] | undefined;
timeout?: number;
trust?: boolean;
description?: string;
includeTools?: string[];
excludeTools?: string[];
},
) {
const {
scope,
transport,
env,
header,
timeout,
trust,
description,
includeTools,
excludeTools,
} = options;
const settingsScope =
scope === 'user' ? SettingScope.User : SettingScope.Workspace;
const settings = loadSettings(process.cwd());
let newServer: Partial<MCPServerConfig> = {};
const headers = header?.reduce(
(acc, curr) => {
const [key, ...valueParts] = curr.split(':');
const value = valueParts.join(':').trim();
if (key.trim() && value) {
acc[key.trim()] = value;
}
return acc;
},
{} as Record<string, string>,
);
switch (transport) {
case 'sse':
newServer = {
url: commandOrUrl,
headers,
timeout,
trust,
description,
includeTools,
excludeTools,
};
break;
case 'http':
newServer = {
httpUrl: commandOrUrl,
headers,
timeout,
trust,
description,
includeTools,
excludeTools,
};
break;
case 'stdio':
default:
newServer = {
command: commandOrUrl,
args: args?.map(String),
env: env?.reduce(
(acc, curr) => {
const [key, value] = curr.split('=');
if (key && value) {
acc[key] = value;
}
return acc;
},
{} as Record<string, string>,
),
timeout,
trust,
description,
includeTools,
excludeTools,
};
break;
}
const existingSettings = settings.forScope(settingsScope).settings;
const mcpServers = existingSettings.mcpServers || {};
const isExistingServer = !!mcpServers[name];
if (isExistingServer) {
console.log(
`MCP server "${name}" is already configured within ${scope} settings.`,
);
}
mcpServers[name] = newServer as MCPServerConfig;
settings.setValue(settingsScope, 'mcpServers', mcpServers);
if (isExistingServer) {
console.log(`MCP server "${name}" updated in ${scope} settings.`);
} else {
console.log(
`MCP server "${name}" added to ${scope} settings. (${transport})`,
);
}
}
export const addCommand: CommandModule = {
command: 'add <name> <commandOrUrl> [args...]',
describe: 'Add a server',
builder: (yargs) =>
yargs
.usage('Usage: gemini mcp add [options] <name> <commandOrUrl> [args...]')
.positional('name', {
describe: 'Name of the server',
type: 'string',
demandOption: true,
})
.positional('commandOrUrl', {
describe: 'Command (stdio) or URL (sse, http)',
type: 'string',
demandOption: true,
})
.option('scope', {
alias: 's',
describe: 'Configuration scope (user or project)',
type: 'string',
default: 'project',
choices: ['user', 'project'],
})
.option('transport', {
alias: 't',
describe: 'Transport type (stdio, sse, http)',
type: 'string',
default: 'stdio',
choices: ['stdio', 'sse', 'http'],
})
.option('env', {
alias: 'e',
describe: 'Set environment variables (e.g. -e KEY=value)',
type: 'array',
string: true,
})
.option('header', {
alias: 'H',
describe:
'Set HTTP headers for SSE and HTTP transports (e.g. -H "X-Api-Key: abc123" -H "Authorization: Bearer abc123")',
type: 'array',
string: true,
})
.option('timeout', {
describe: 'Set connection timeout in milliseconds',
type: 'number',
})
.option('trust', {
describe:
'Trust the server (bypass all tool call confirmation prompts)',
type: 'boolean',
})
.option('description', {
describe: 'Set the description for the server',
type: 'string',
})
.option('include-tools', {
describe: 'A comma-separated list of tools to include',
type: 'array',
string: true,
})
.option('exclude-tools', {
describe: 'A comma-separated list of tools to exclude',
type: 'array',
string: true,
}),
handler: async (argv) => {
await addMcpServer(
argv.name as string,
argv.commandOrUrl as string,
argv.args as Array<string | number>,
{
scope: argv.scope as string,
transport: argv.transport as string,
env: argv.env as string[],
header: argv.header as string[],
timeout: argv.timeout as number | undefined,
trust: argv.trust as boolean | undefined,
description: argv.description as string | undefined,
includeTools: argv.includeTools as string[] | undefined,
excludeTools: argv.excludeTools as string[] | undefined,
},
);
},
};
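The add command's option parsing above reduces repeated -H and -e flags into plain objects. The following is a minimal standalone sketch of that reduction, for reference; parseHeaders and parseEnv are illustrative names, not exports of the command module.

function parseHeaders(header: string[]): Record<string, string> {
  // '-H "Authorization: Bearer abc123"' -> { Authorization: 'Bearer abc123' }
  return header.reduce(
    (acc, curr) => {
      const [key, ...valueParts] = curr.split(':');
      const value = valueParts.join(':').trim();
      if (key.trim() && value) {
        acc[key.trim()] = value;
      }
      return acc;
    },
    {} as Record<string, string>,
  );
}

function parseEnv(env: string[]): Record<string, string> {
  // '-e FOO=bar' -> { FOO: 'bar' }
  return env.reduce(
    (acc, curr) => {
      const [key, value] = curr.split('=');
      if (key && value) {
        acc[key] = value;
      }
      return acc;
    },
    {} as Record<string, string>,
  );
}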


@@ -0,0 +1,154 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { vi, describe, it, expect, beforeEach, afterEach } from 'vitest';
import { listMcpServers } from './list.js';
import { loadSettings } from '../../config/settings.js';
import { loadExtensions } from '../../config/extension.js';
import { createTransport } from '@qwen-code/qwen-code-core';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
vi.mock('../../config/settings.js');
vi.mock('../../config/extension.js');
vi.mock('@qwen-code/qwen-code-core');
vi.mock('@modelcontextprotocol/sdk/client/index.js');
const mockedLoadSettings = loadSettings as vi.Mock;
const mockedLoadExtensions = loadExtensions as vi.Mock;
const mockedCreateTransport = createTransport as vi.Mock;
const MockedClient = Client as vi.Mock;
interface MockClient {
connect: vi.Mock;
ping: vi.Mock;
close: vi.Mock;
}
interface MockTransport {
close: vi.Mock;
}
describe('mcp list command', () => {
let consoleSpy: vi.SpyInstance;
let mockClient: MockClient;
let mockTransport: MockTransport;
beforeEach(() => {
vi.resetAllMocks();
consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
mockTransport = { close: vi.fn() };
mockClient = {
connect: vi.fn(),
ping: vi.fn(),
close: vi.fn(),
};
MockedClient.mockImplementation(() => mockClient);
mockedCreateTransport.mockResolvedValue(mockTransport);
mockedLoadExtensions.mockReturnValue([]);
});
afterEach(() => {
consoleSpy.mockRestore();
});
it('should display message when no servers configured', async () => {
mockedLoadSettings.mockReturnValue({ merged: { mcpServers: {} } });
await listMcpServers();
expect(consoleSpy).toHaveBeenCalledWith('No MCP servers configured.');
});
it('should display different server types with connected status', async () => {
mockedLoadSettings.mockReturnValue({
merged: {
mcpServers: {
'stdio-server': { command: '/path/to/server', args: ['arg1'] },
'sse-server': { url: 'https://example.com/sse' },
'http-server': { httpUrl: 'https://example.com/http' },
},
},
});
mockClient.connect.mockResolvedValue(undefined);
mockClient.ping.mockResolvedValue(undefined);
await listMcpServers();
expect(consoleSpy).toHaveBeenCalledWith('Configured MCP servers:\n');
expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining(
'stdio-server: /path/to/server arg1 (stdio) - Connected',
),
);
expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining(
'sse-server: https://example.com/sse (sse) - Connected',
),
);
expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining(
'http-server: https://example.com/http (http) - Connected',
),
);
});
it('should display disconnected status when connection fails', async () => {
mockedLoadSettings.mockReturnValue({
merged: {
mcpServers: {
'test-server': { command: '/test/server' },
},
},
});
mockClient.connect.mockRejectedValue(new Error('Connection failed'));
await listMcpServers();
expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining(
'test-server: /test/server (stdio) - Disconnected',
),
);
});
it('should merge extension servers with config servers', async () => {
mockedLoadSettings.mockReturnValue({
merged: {
mcpServers: { 'config-server': { command: '/config/server' } },
},
});
mockedLoadExtensions.mockReturnValue([
{
config: {
name: 'test-extension',
mcpServers: { 'extension-server': { command: '/ext/server' } },
},
},
]);
mockClient.connect.mockResolvedValue(undefined);
mockClient.ping.mockResolvedValue(undefined);
await listMcpServers();
expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining(
'config-server: /config/server (stdio) - Connected',
),
);
expect(consoleSpy).toHaveBeenCalledWith(
expect.stringContaining(
'extension-server: /ext/server (stdio) - Connected',
),
);
});
});


@@ -0,0 +1,139 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
// File for 'gemini mcp list' command
import type { CommandModule } from 'yargs';
import { loadSettings } from '../../config/settings.js';
import {
MCPServerConfig,
MCPServerStatus,
createTransport,
} from '@qwen-code/qwen-code-core';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { loadExtensions } from '../../config/extension.js';
const COLOR_GREEN = '\u001b[32m';
const COLOR_YELLOW = '\u001b[33m';
const COLOR_RED = '\u001b[31m';
const RESET_COLOR = '\u001b[0m';
async function getMcpServersFromConfig(): Promise<
Record<string, MCPServerConfig>
> {
const settings = loadSettings(process.cwd());
const extensions = loadExtensions(process.cwd());
const mcpServers = { ...(settings.merged.mcpServers || {}) };
for (const extension of extensions) {
Object.entries(extension.config.mcpServers || {}).forEach(
([key, server]) => {
if (mcpServers[key]) {
return;
}
mcpServers[key] = {
...server,
extensionName: extension.config.name,
};
},
);
}
return mcpServers;
}
async function testMCPConnection(
serverName: string,
config: MCPServerConfig,
): Promise<MCPServerStatus> {
const client = new Client({
name: 'mcp-test-client',
version: '0.0.1',
});
let transport;
try {
// Use the same transport creation logic as core
transport = await createTransport(serverName, config, false);
} catch (_error) {
await client.close();
return MCPServerStatus.DISCONNECTED;
}
try {
// Attempt actual MCP connection with short timeout
await client.connect(transport, { timeout: 5000 }); // 5s timeout
// Test basic MCP protocol by pinging the server
await client.ping();
await client.close();
return MCPServerStatus.CONNECTED;
} catch (_error) {
await transport.close();
return MCPServerStatus.DISCONNECTED;
}
}
async function getServerStatus(
serverName: string,
server: MCPServerConfig,
): Promise<MCPServerStatus> {
// Test all server types by attempting actual connection
return await testMCPConnection(serverName, server);
}
export async function listMcpServers(): Promise<void> {
const mcpServers = await getMcpServersFromConfig();
const serverNames = Object.keys(mcpServers);
if (serverNames.length === 0) {
console.log('No MCP servers configured.');
return;
}
console.log('Configured MCP servers:\n');
for (const serverName of serverNames) {
const server = mcpServers[serverName];
const status = await getServerStatus(serverName, server);
let statusIndicator = '';
let statusText = '';
switch (status) {
case MCPServerStatus.CONNECTED:
statusIndicator = COLOR_GREEN + '✓' + RESET_COLOR;
statusText = 'Connected';
break;
case MCPServerStatus.CONNECTING:
statusIndicator = COLOR_YELLOW + '…' + RESET_COLOR;
statusText = 'Connecting';
break;
case MCPServerStatus.DISCONNECTED:
default:
statusIndicator = COLOR_RED + '✗' + RESET_COLOR;
statusText = 'Disconnected';
break;
}
let serverInfo = `${serverName}: `;
if (server.httpUrl) {
serverInfo += `${server.httpUrl} (http)`;
} else if (server.url) {
serverInfo += `${server.url} (sse)`;
} else if (server.command) {
serverInfo += `${server.command} ${server.args?.join(' ') || ''} (stdio)`;
}
console.log(`${statusIndicator} ${serverInfo} - ${statusText}`);
}
}
export const listCommand: CommandModule = {
command: 'list',
describe: 'List all configured MCP servers',
handler: async () => {
await listMcpServers();
},
};
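Each configured server is summarized on one line: the transport-specific address, a transport label, and the connection status. Below is a small sketch of just the address formatting, using a local ServerLike type and a hypothetical formatServerInfo helper (neither is part of list.ts):

type ServerLike = {
  command?: string;
  args?: string[];
  url?: string;
  httpUrl?: string;
};

// Mirrors the serverInfo construction in listMcpServers above.
function formatServerInfo(name: string, server: ServerLike): string {
  if (server.httpUrl) return `${name}: ${server.httpUrl} (http)`;
  if (server.url) return `${name}: ${server.url} (sse)`;
  return `${name}: ${server.command} ${server.args?.join(' ') || ''} (stdio)`;
}

formatServerInfo('http-server', { httpUrl: 'https://example.com/mcp' });
// -> 'http-server: https://example.com/mcp (http)'
formatServerInfo('stdio-server', { command: '/path/to/server', args: ['arg1'] });
// -> 'stdio-server: /path/to/server arg1 (stdio)'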


@@ -0,0 +1,69 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { vi, describe, it, expect, beforeEach } from 'vitest';
import yargs from 'yargs';
import { loadSettings, SettingScope } from '../../config/settings.js';
import { removeCommand } from './remove.js';
vi.mock('fs/promises', () => ({
readFile: vi.fn(),
writeFile: vi.fn(),
}));
vi.mock('../../config/settings.js', async () => {
const actual = await vi.importActual('../../config/settings.js');
return {
...actual,
loadSettings: vi.fn(),
};
});
const mockedLoadSettings = loadSettings as vi.Mock;
describe('mcp remove command', () => {
let parser: yargs.Argv;
let mockSetValue: vi.Mock;
let mockSettings: Record<string, unknown>;
beforeEach(() => {
vi.resetAllMocks();
const yargsInstance = yargs([]).command(removeCommand);
parser = yargsInstance;
mockSetValue = vi.fn();
mockSettings = {
mcpServers: {
'test-server': {
command: 'echo "hello"',
},
},
};
mockedLoadSettings.mockReturnValue({
forScope: () => ({ settings: mockSettings }),
setValue: mockSetValue,
});
});
it('should remove a server from project settings', async () => {
await parser.parseAsync('remove test-server');
expect(mockSetValue).toHaveBeenCalledWith(
SettingScope.Workspace,
'mcpServers',
{},
);
});
it('should show a message if server not found', async () => {
const consoleSpy = vi.spyOn(console, 'log').mockImplementation(() => {});
await parser.parseAsync('remove non-existent-server');
expect(mockSetValue).not.toHaveBeenCalled();
expect(consoleSpy).toHaveBeenCalledWith(
'Server "non-existent-server" not found in project settings.',
);
});
});


@@ -0,0 +1,60 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
// File for 'gemini mcp remove' command
import type { CommandModule } from 'yargs';
import { loadSettings, SettingScope } from '../../config/settings.js';
async function removeMcpServer(
name: string,
options: {
scope: string;
},
) {
const { scope } = options;
const settingsScope =
scope === 'user' ? SettingScope.User : SettingScope.Workspace;
const settings = loadSettings(process.cwd());
const existingSettings = settings.forScope(settingsScope).settings;
const mcpServers = existingSettings.mcpServers || {};
if (!mcpServers[name]) {
console.log(`Server "${name}" not found in ${scope} settings.`);
return;
}
delete mcpServers[name];
settings.setValue(settingsScope, 'mcpServers', mcpServers);
console.log(`Server "${name}" removed from ${scope} settings.`);
}
export const removeCommand: CommandModule = {
command: 'remove <name>',
describe: 'Remove a server',
builder: (yargs) =>
yargs
.usage('Usage: gemini mcp remove [options] <name>')
.positional('name', {
describe: 'Name of the server',
type: 'string',
demandOption: true,
})
.option('scope', {
alias: 's',
describe: 'Configuration scope (user or project)',
type: 'string',
default: 'project',
choices: ['user', 'project'],
}),
handler: async (argv) => {
await removeMcpServer(argv.name as string, {
scope: argv.scope as string,
});
},
};


@@ -45,6 +45,12 @@ export const validateAuthMethod = (authMethod: string): string | null => {
return null;
}
if (authMethod === AuthType.QWEN_OAUTH) {
// Qwen OAuth doesn't require any environment variables for basic setup
// The OAuth flow will handle authentication
return null;
}
return 'Invalid auth method selected.';
};
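The new branch means Qwen OAuth passes validation without any environment variables, while unknown methods still fall through to the error string. A hedged test sketch in the same vitest style as the rest of this change set; the './auth.js' import path and the AuthType export location are assumptions for illustration:

import { describe, it, expect } from 'vitest';
import { AuthType } from '@qwen-code/qwen-code-core'; // assumed export location
import { validateAuthMethod } from './auth.js'; // assumed module path

describe('validateAuthMethod', () => {
  it('accepts Qwen OAuth without any environment variables', () => {
    expect(validateAuthMethod(AuthType.QWEN_OAUTH)).toBeNull();
  });
  it('rejects unknown auth methods', () => {
    expect(validateAuthMethod('bogus')).toBe('Invalid auth method selected.');
  });
});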


@@ -37,7 +37,7 @@ describe('Configuration Integration Tests', () => {
let originalEnv: NodeJS.ProcessEnv;
beforeEach(() => {
tempDir = fs.mkdtempSync(path.join(tmpdir(), 'gemini-cli-test-'));
tempDir = fs.mkdtempSync(path.join(tmpdir(), 'qwen-code-test-'));
originalEnv = { ...process.env };
process.env.GEMINI_API_KEY = 'test-api-key';
vi.clearAllMocks();


@@ -6,7 +6,10 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import * as os from 'os';
import { loadCliConfig, parseArguments, CliArgs } from './config.js';
import * as fs from 'fs';
import * as path from 'path';
import { ShellTool, EditTool, WriteFileTool } from '@qwen-code/qwen-code-core';
import { loadCliConfig, parseArguments } from './config.js';
import { Settings } from './settings.js';
import { Extension } from './extension.js';
import * as ServerConfig from '@qwen-code/qwen-code-core';
@@ -35,9 +38,16 @@ vi.mock('@qwen-code/qwen-code-core', async () => {
);
return {
...actualServer,
IdeClient: {
getInstance: vi.fn().mockReturnValue({
getConnectionStatus: vi.fn(),
initialize: vi.fn(),
shutdown: vi.fn(),
}),
},
loadEnvironment: vi.fn(),
loadServerHierarchicalMemory: vi.fn(
(cwd, debug, fileService, extensionPaths, _maxDirs) =>
(cwd, dirs, debug, fileService, extensionPaths, _maxDirs) =>
Promise.resolve({
memoryContent: extensionPaths?.join(',') || '',
fileCount: extensionPaths?.length || 0,
@@ -492,6 +502,7 @@ describe('Hierarchical Memory Loading (config.ts) - Placeholder Suite', () => {
await loadCliConfig(settings, extensions, 'session-id', argv);
expect(ServerConfig.loadServerHierarchicalMemory).toHaveBeenCalledWith(
expect.any(String),
[],
false,
expect.any(Object),
[
@@ -499,6 +510,7 @@ describe('Hierarchical Memory Loading (config.ts) - Placeholder Suite', () => {
'/path/to/ext3/context1.md',
'/path/to/ext3/context2.md',
],
'tree',
{
respectGitIgnore: false,
respectGeminiIgnore: true,
@@ -624,6 +636,17 @@ describe('loadCliConfig systemPromptMappings', () => {
});
describe('mergeExcludeTools', () => {
const defaultExcludes = [ShellTool.Name, EditTool.Name, WriteFileTool.Name];
const originalIsTTY = process.stdin.isTTY;
beforeEach(() => {
process.stdin.isTTY = true;
});
afterEach(() => {
process.stdin.isTTY = originalIsTTY;
});
it('should merge excludeTools from settings and extensions', async () => {
const settings: Settings = { excludeTools: ['tool1', 'tool2'] };
const extensions: Extension[] = [
@@ -718,7 +741,8 @@ describe('mergeExcludeTools', () => {
expect(config.getExcludeTools()).toHaveLength(4);
});
it('should return an empty array when no excludeTools are specified', async () => {
it('should return an empty array when no excludeTools are specified and it is interactive', async () => {
process.stdin.isTTY = true;
const settings: Settings = {};
const extensions: Extension[] = [];
process.argv = ['node', 'script.js'];
@@ -732,6 +756,21 @@ describe('mergeExcludeTools', () => {
expect(config.getExcludeTools()).toEqual([]);
});
it('should return default excludes when no excludeTools are specified and it is not interactive', async () => {
process.stdin.isTTY = false;
const settings: Settings = {};
const extensions: Extension[] = [];
process.argv = ['node', 'script.js', '-p', 'test'];
const argv = await parseArguments();
const config = await loadCliConfig(
settings,
extensions,
'test-session',
argv,
);
expect(config.getExcludeTools()).toEqual(defaultExcludes);
});
it('should handle settings with excludeTools but no extensions', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
@@ -983,7 +1022,69 @@ describe('loadCliConfig extensions', () => {
});
});
describe('loadCliConfig ideMode', () => {
describe('loadCliConfig model selection', () => {
it('selects a model from settings.json if provided', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const config = await loadCliConfig(
{
model: 'qwen3-coder-plus',
},
[],
'test-session',
argv,
);
expect(config.getModel()).toBe('qwen3-coder-plus');
});
it('uses the default gemini model if nothing is set', async () => {
process.argv = ['node', 'script.js']; // No model set.
const argv = await parseArguments();
const config = await loadCliConfig(
{
// No model set.
},
[],
'test-session',
argv,
);
expect(config.getModel()).toBe('qwen3-coder-plus');
});
it('always prefers model from argvs', async () => {
process.argv = ['node', 'script.js', '--model', 'qwen3-coder-plus'];
const argv = await parseArguments();
const config = await loadCliConfig(
{
model: 'qwen3-coder-plus',
},
[],
'test-session',
argv,
);
expect(config.getModel()).toBe('qwen3-coder-plus');
});
it('selects the model from argvs if provided', async () => {
process.argv = ['node', 'script.js', '--model', 'qwen3-coder-plus'];
const argv = await parseArguments();
const config = await loadCliConfig(
{
// No model provided via settings.
},
[],
'test-session',
argv,
);
expect(config.getModel()).toBe('qwen3-coder-plus');
});
});
describe('loadCliConfig ideModeFeature', () => {
const originalArgv = process.argv;
const originalEnv = { ...process.env };
@@ -991,10 +1092,8 @@ describe('loadCliConfig ideMode', () => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
process.env.GEMINI_API_KEY = 'test-api-key';
// Explicitly delete TERM_PROGRAM and SANDBOX before each test
delete process.env.TERM_PROGRAM;
delete process.env.SANDBOX;
delete process.env.GEMINI_CLI_IDE_SERVER_PORT;
delete process.env.QWEN_CODE_IDE_SERVER_PORT;
});
afterEach(() => {
@@ -1008,81 +1107,324 @@ describe('loadCliConfig ideMode', () => {
const settings: Settings = {};
const argv = await parseArguments();
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(false);
});
it('should be false if --ide-mode is true but TERM_PROGRAM is not vscode', async () => {
process.argv = ['node', 'script.js', '--ide-mode'];
const settings: Settings = {};
const argv = await parseArguments();
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(false);
});
it('should be false if settings.ideMode is true but TERM_PROGRAM is not vscode', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const settings: Settings = { ideMode: true };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(false);
});
it('should be true when --ide-mode is set and TERM_PROGRAM is vscode', async () => {
process.argv = ['node', 'script.js', '--ide-mode'];
const argv = await parseArguments();
process.env.TERM_PROGRAM = 'vscode';
process.env.GEMINI_CLI_IDE_SERVER_PORT = '3000';
const settings: Settings = {};
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(true);
});
it('should be true when settings.ideMode is true and TERM_PROGRAM is vscode', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
process.env.TERM_PROGRAM = 'vscode';
process.env.GEMINI_CLI_IDE_SERVER_PORT = '3000';
const settings: Settings = { ideMode: true };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(true);
});
it('should prioritize --ide-mode (true) over settings (false) when TERM_PROGRAM is vscode', async () => {
process.argv = ['node', 'script.js', '--ide-mode'];
const argv = await parseArguments();
process.env.TERM_PROGRAM = 'vscode';
process.env.GEMINI_CLI_IDE_SERVER_PORT = '3000';
const settings: Settings = { ideMode: false };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(true);
});
it('should prioritize --no-ide-mode (false) over settings (true) even when TERM_PROGRAM is vscode', async () => {
process.argv = ['node', 'script.js', '--no-ide-mode'];
const argv = await parseArguments();
process.env.TERM_PROGRAM = 'vscode';
const settings: Settings = { ideMode: true };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(false);
});
it('should be false when --ide-mode is true, TERM_PROGRAM is vscode, but SANDBOX is set', async () => {
process.argv = ['node', 'script.js', '--ide-mode'];
const argv = await parseArguments();
process.env.TERM_PROGRAM = 'vscode';
process.env.SANDBOX = 'true';
const settings: Settings = {};
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(false);
});
it('should be false when settings.ideMode is true, TERM_PROGRAM is vscode, but SANDBOX is set', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
process.env.TERM_PROGRAM = 'vscode';
process.env.SANDBOX = 'true';
const settings: Settings = { ideMode: true };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getIdeMode()).toBe(false);
expect(config.getIdeModeFeature()).toBe(false);
});
});
describe('loadCliConfig folderTrustFeature', () => {
const originalArgv = process.argv;
const originalEnv = { ...process.env };
beforeEach(() => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
process.env.GEMINI_API_KEY = 'test-api-key';
});
afterEach(() => {
process.argv = originalArgv;
process.env = originalEnv;
vi.restoreAllMocks();
});
it('should be false by default', async () => {
process.argv = ['node', 'script.js'];
const settings: Settings = {};
const argv = await parseArguments();
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getFolderTrustFeature()).toBe(false);
});
it('should be true when settings.folderTrustFeature is true', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const settings: Settings = { folderTrustFeature: true };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getFolderTrustFeature()).toBe(true);
});
});
describe('loadCliConfig folderTrust', () => {
const originalArgv = process.argv;
const originalEnv = { ...process.env };
beforeEach(() => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
process.env.GEMINI_API_KEY = 'test-api-key';
});
afterEach(() => {
process.argv = originalArgv;
process.env = originalEnv;
vi.restoreAllMocks();
});
it('should be false if folderTrustFeature is false and folderTrust is false', async () => {
process.argv = ['node', 'script.js'];
const settings: Settings = {
folderTrustFeature: false,
folderTrust: false,
};
const argv = await parseArguments();
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getFolderTrust()).toBe(false);
});
it('should be false if folderTrustFeature is true and folderTrust is false', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const settings: Settings = { folderTrustFeature: true, folderTrust: false };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getFolderTrust()).toBe(false);
});
it('should be false if folderTrustFeature is false and folderTrust is true', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const settings: Settings = { folderTrustFeature: false, folderTrust: true };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getFolderTrust()).toBe(false);
});
it('should be true when folderTrustFeature is true and folderTrust is true', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const settings: Settings = { folderTrustFeature: true, folderTrust: true };
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getFolderTrust()).toBe(true);
});
});
vi.mock('fs', async () => {
const actualFs = await vi.importActual<typeof fs>('fs');
const MOCK_CWD1 = process.cwd();
const MOCK_CWD2 = path.resolve(path.sep, 'home', 'user', 'project');
const mockPaths = new Set([
MOCK_CWD1,
MOCK_CWD2,
path.resolve(path.sep, 'cli', 'path1'),
path.resolve(path.sep, 'settings', 'path1'),
path.join(os.homedir(), 'settings', 'path2'),
path.join(MOCK_CWD2, 'cli', 'path2'),
path.join(MOCK_CWD2, 'settings', 'path3'),
]);
return {
...actualFs,
existsSync: vi.fn((p) => mockPaths.has(p.toString())),
statSync: vi.fn((p) => {
if (mockPaths.has(p.toString())) {
return { isDirectory: () => true };
}
// Fallback for other paths if needed, though the test should be specific.
return actualFs.statSync(p);
}),
realpathSync: vi.fn((p) => p),
};
});
describe('loadCliConfig with includeDirectories', () => {
const originalArgv = process.argv;
const originalEnv = { ...process.env };
beforeEach(() => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
process.env.GEMINI_API_KEY = 'test-api-key';
vi.spyOn(process, 'cwd').mockReturnValue(
path.resolve(path.sep, 'home', 'user', 'project'),
);
});
afterEach(() => {
process.argv = originalArgv;
process.env = originalEnv;
vi.restoreAllMocks();
});
it('should combine and resolve paths from settings and CLI arguments', async () => {
const mockCwd = path.resolve(path.sep, 'home', 'user', 'project');
process.argv = [
'node',
'script.js',
'--include-directories',
`${path.resolve(path.sep, 'cli', 'path1')},${path.join(mockCwd, 'cli', 'path2')}`,
];
const argv = await parseArguments();
const settings: Settings = {
includeDirectories: [
path.resolve(path.sep, 'settings', 'path1'),
path.join(os.homedir(), 'settings', 'path2'),
path.join(mockCwd, 'settings', 'path3'),
],
};
const config = await loadCliConfig(settings, [], 'test-session', argv);
const expected = [
mockCwd,
path.resolve(path.sep, 'cli', 'path1'),
path.join(mockCwd, 'cli', 'path2'),
path.resolve(path.sep, 'settings', 'path1'),
path.join(os.homedir(), 'settings', 'path2'),
path.join(mockCwd, 'settings', 'path3'),
];
expect(config.getWorkspaceContext().getDirectories()).toEqual(
expect.arrayContaining(expected),
);
expect(config.getWorkspaceContext().getDirectories()).toHaveLength(
expected.length,
);
});
});
describe('loadCliConfig chatCompression', () => {
const originalArgv = process.argv;
const originalEnv = { ...process.env };
beforeEach(() => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
process.env.GEMINI_API_KEY = 'test-api-key';
});
afterEach(() => {
process.argv = originalArgv;
process.env = originalEnv;
vi.restoreAllMocks();
});
it('should pass chatCompression settings to the core config', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const settings: Settings = {
chatCompression: {
contextPercentageThreshold: 0.5,
},
};
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getChatCompression()).toEqual({
contextPercentageThreshold: 0.5,
});
});
it('should have undefined chatCompression if not in settings', async () => {
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const settings: Settings = {};
const config = await loadCliConfig(settings, [], 'test-session', argv);
expect(config.getChatCompression()).toBeUndefined();
});
});
describe('loadCliConfig tool exclusions', () => {
const originalArgv = process.argv;
const originalEnv = { ...process.env };
const originalIsTTY = process.stdin.isTTY;
beforeEach(() => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
process.env.GEMINI_API_KEY = 'test-api-key';
process.stdin.isTTY = true;
});
afterEach(() => {
process.argv = originalArgv;
process.env = originalEnv;
process.stdin.isTTY = originalIsTTY;
vi.restoreAllMocks();
});
it('should not exclude interactive tools in interactive mode without YOLO', async () => {
process.stdin.isTTY = true;
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.getExcludeTools()).not.toContain('run_shell_command');
expect(config.getExcludeTools()).not.toContain('replace');
expect(config.getExcludeTools()).not.toContain('write_file');
});
it('should not exclude interactive tools in interactive mode with YOLO', async () => {
process.stdin.isTTY = true;
process.argv = ['node', 'script.js', '--yolo'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.getExcludeTools()).not.toContain('run_shell_command');
expect(config.getExcludeTools()).not.toContain('replace');
expect(config.getExcludeTools()).not.toContain('write_file');
});
it('should exclude interactive tools in non-interactive mode without YOLO', async () => {
process.stdin.isTTY = false;
process.argv = ['node', 'script.js', '-p', 'test'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.getExcludeTools()).toContain('run_shell_command');
expect(config.getExcludeTools()).toContain('replace');
expect(config.getExcludeTools()).toContain('write_file');
});
it('should not exclude interactive tools in non-interactive mode with YOLO', async () => {
process.stdin.isTTY = false;
process.argv = ['node', 'script.js', '-p', 'test', '--yolo'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.getExcludeTools()).not.toContain('run_shell_command');
expect(config.getExcludeTools()).not.toContain('replace');
expect(config.getExcludeTools()).not.toContain('write_file');
});
});
describe('loadCliConfig interactive', () => {
const originalArgv = process.argv;
const originalEnv = { ...process.env };
const originalIsTTY = process.stdin.isTTY;
beforeEach(() => {
vi.resetAllMocks();
vi.mocked(os.homedir).mockReturnValue('/mock/home/user');
process.env.GEMINI_API_KEY = 'test-api-key';
process.stdin.isTTY = true;
});
afterEach(() => {
process.argv = originalArgv;
process.env = originalEnv;
process.stdin.isTTY = originalIsTTY;
vi.restoreAllMocks();
});
it('should be interactive if isTTY and no prompt', async () => {
process.stdin.isTTY = true;
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.isInteractive()).toBe(true);
});
it('should be interactive if prompt-interactive is set', async () => {
process.stdin.isTTY = false;
process.argv = ['node', 'script.js', '--prompt-interactive', 'test'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.isInteractive()).toBe(true);
});
it('should not be interactive if not isTTY and no prompt', async () => {
process.stdin.isTTY = false;
process.argv = ['node', 'script.js'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.isInteractive()).toBe(false);
});
it('should not be interactive if prompt is set', async () => {
process.stdin.isTTY = true;
process.argv = ['node', 'script.js', '--prompt', 'test'];
const argv = await parseArguments();
const config = await loadCliConfig({}, [], 'test-session', argv);
expect(config.isInteractive()).toBe(false);
});
});
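The model-selection tests above pin down a simple precedence: an explicit --model flag wins, then the model from settings.json, then the built-in default. A minimal sketch of that fallback chain, assuming the same argv.model || settings.model || DEFAULT_GEMINI_MODEL expression used in loadCliConfig below; resolveModel is an illustrative name:

const DEFAULT_MODEL = 'qwen3-coder-plus'; // default value the tests above expect

function resolveModel(argvModel?: string, settingsModel?: string): string {
  return argvModel || settingsModel || DEFAULT_MODEL;
}

resolveModel('qwen3-coder-plus', 'other-model'); // CLI flag wins
resolveModel(undefined, 'qwen3-coder-plus'); // settings.json value used
resolveModel(); // built-in default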


@@ -4,9 +4,13 @@
* SPDX-License-Identifier: Apache-2.0
*/
import * as fs from 'fs';
import * as path from 'path';
import { homedir } from 'node:os';
import yargs from 'yargs/yargs';
import { hideBin } from 'yargs/helpers';
import process from 'node:process';
import { mcpCommand } from '../commands/mcp.js';
import {
Config,
loadServerHierarchicalMemory,
@@ -19,13 +23,18 @@ import {
FileDiscoveryService,
TelemetryTarget,
FileFilteringOptions,
IdeClient,
ShellTool,
EditTool,
WriteFileTool,
MCPServerConfig,
ConfigParameters,
} from '@qwen-code/qwen-code-core';
import { Settings } from './settings.js';
import { Extension, annotateActiveExtensions } from './extension.js';
import { getCliVersion } from '../utils/version.js';
import { loadSandboxConfig } from './sandboxConfig.js';
import { resolvePath } from '../utils/resolvePath.js';
// Simple console logger for now - replace with actual logger if available
const logger = {
@@ -59,178 +68,209 @@ export interface CliArgs {
experimentalAcp: boolean | undefined;
extensions: string[] | undefined;
listExtensions: boolean | undefined;
ideMode: boolean | undefined;
ideModeFeature: boolean | undefined;
openaiLogging: boolean | undefined;
openaiApiKey: string | undefined;
openaiBaseUrl: string | undefined;
proxy: string | undefined;
includeDirectories: string[] | undefined;
tavilyApiKey: string | undefined;
}
export async function parseArguments(): Promise<CliArgs> {
const yargsInstance = yargs(hideBin(process.argv))
.scriptName('qwen')
.usage(
'$0 [options]',
'Qwen Code - Launch an interactive CLI, use -p/--prompt for non-interactive mode',
'Usage: qwen [options] [command]\n\nQwen Code - Launch an interactive CLI, use -p/--prompt for non-interactive mode',
)
.option('model', {
alias: 'm',
type: 'string',
description: `Model`,
default: process.env.GEMINI_MODEL || DEFAULT_GEMINI_MODEL,
})
.option('prompt', {
alias: 'p',
type: 'string',
description: 'Prompt. Appended to input on stdin (if any).',
})
.option('prompt-interactive', {
alias: 'i',
type: 'string',
description:
'Execute the provided prompt and continue in interactive mode',
})
.option('sandbox', {
alias: 's',
type: 'boolean',
description: 'Run in sandbox?',
})
.option('sandbox-image', {
type: 'string',
description: 'Sandbox image URI.',
})
.option('debug', {
alias: 'd',
type: 'boolean',
description: 'Run in debug mode?',
default: false,
})
.option('all-files', {
alias: ['a'],
type: 'boolean',
description: 'Include ALL files in context?',
default: false,
})
.option('all_files', {
type: 'boolean',
description: 'Include ALL files in context?',
default: false,
})
.deprecateOption(
'all_files',
'Use --all-files instead. We will be removing --all_files in the coming weeks.',
.command('$0', 'Launch Qwen Code', (yargsInstance) =>
yargsInstance
.option('model', {
alias: 'm',
type: 'string',
description: `Model`,
default: process.env.GEMINI_MODEL,
})
.option('prompt', {
alias: 'p',
type: 'string',
description: 'Prompt. Appended to input on stdin (if any).',
})
.option('prompt-interactive', {
alias: 'i',
type: 'string',
description:
'Execute the provided prompt and continue in interactive mode',
})
.option('sandbox', {
alias: 's',
type: 'boolean',
description: 'Run in sandbox?',
})
.option('sandbox-image', {
type: 'string',
description: 'Sandbox image URI.',
})
.option('debug', {
alias: 'd',
type: 'boolean',
description: 'Run in debug mode?',
default: false,
})
.option('all-files', {
alias: ['a'],
type: 'boolean',
description: 'Include ALL files in context?',
default: false,
})
.option('all_files', {
type: 'boolean',
description: 'Include ALL files in context?',
default: false,
})
.deprecateOption(
'all_files',
'Use --all-files instead. We will be removing --all_files in the coming weeks.',
)
.option('show-memory-usage', {
type: 'boolean',
description: 'Show memory usage in status bar',
default: false,
})
.option('show_memory_usage', {
type: 'boolean',
description: 'Show memory usage in status bar',
default: false,
})
.deprecateOption(
'show_memory_usage',
'Use --show-memory-usage instead. We will be removing --show_memory_usage in the coming weeks.',
)
.option('yolo', {
alias: 'y',
type: 'boolean',
description:
'Automatically accept all actions (aka YOLO mode, see https://www.youtube.com/watch?v=xvFZjo5PgG0 for more details)?',
default: false,
})
.option('telemetry', {
type: 'boolean',
description:
'Enable telemetry? This flag specifically controls if telemetry is sent. Other --telemetry-* flags set specific values but do not enable telemetry on their own.',
})
.option('telemetry-target', {
type: 'string',
choices: ['local', 'gcp'],
description:
'Set the telemetry target (local or gcp). Overrides settings files.',
})
.option('telemetry-otlp-endpoint', {
type: 'string',
description:
'Set the OTLP endpoint for telemetry. Overrides environment variables and settings files.',
})
.option('telemetry-log-prompts', {
type: 'boolean',
description:
'Enable or disable logging of user prompts for telemetry. Overrides settings files.',
})
.option('telemetry-outfile', {
type: 'string',
description: 'Redirect all telemetry output to the specified file.',
})
.option('checkpointing', {
alias: 'c',
type: 'boolean',
description: 'Enables checkpointing of file edits',
default: false,
})
.option('experimental-acp', {
type: 'boolean',
description: 'Starts the agent in ACP mode',
})
.option('allowed-mcp-server-names', {
type: 'array',
string: true,
description: 'Allowed MCP server names',
})
.option('extensions', {
alias: 'e',
type: 'array',
string: true,
description:
'A list of extensions to use. If not provided, all extensions are used.',
})
.option('list-extensions', {
alias: 'l',
type: 'boolean',
description: 'List all available extensions and exit.',
})
.option('ide-mode-feature', {
type: 'boolean',
description: 'Run in IDE mode?',
})
.option('proxy', {
type: 'string',
description:
'Proxy for gemini client, like schema://user:password@host:port',
})
.option('include-directories', {
type: 'array',
string: true,
description:
'Additional directories to include in the workspace (comma-separated or multiple --include-directories)',
coerce: (dirs: string[]) =>
// Handle comma-separated values
dirs.flatMap((dir) => dir.split(',').map((d) => d.trim())),
})
.option('openai-logging', {
type: 'boolean',
description:
'Enable logging of OpenAI API calls for debugging and analysis',
})
.option('openai-api-key', {
type: 'string',
description: 'OpenAI API key to use for authentication',
})
.option('openai-base-url', {
type: 'string',
description: 'OpenAI base URL (for custom endpoints)',
})
.option('tavily-api-key', {
type: 'string',
description: 'Tavily API key for web search functionality',
})
.check((argv) => {
if (argv.prompt && argv.promptInteractive) {
throw new Error(
'Cannot use both --prompt (-p) and --prompt-interactive (-i) together',
);
}
return true;
}),
)
.option('show-memory-usage', {
type: 'boolean',
description: 'Show memory usage in status bar',
default: false,
})
.option('show_memory_usage', {
type: 'boolean',
description: 'Show memory usage in status bar',
default: false,
})
.deprecateOption(
'show_memory_usage',
'Use --show-memory-usage instead. We will be removing --show_memory_usage in the coming weeks.',
)
.option('yolo', {
alias: 'y',
type: 'boolean',
description:
'Automatically accept all actions (aka YOLO mode, see https://www.youtube.com/watch?v=xvFZjo5PgG0 for more details)?',
default: false,
})
.option('telemetry', {
type: 'boolean',
description:
'Enable telemetry? This flag specifically controls if telemetry is sent. Other --telemetry-* flags set specific values but do not enable telemetry on their own.',
})
.option('telemetry-target', {
type: 'string',
choices: ['local', 'gcp'],
description:
'Set the telemetry target (local or gcp). Overrides settings files.',
})
.option('telemetry-otlp-endpoint', {
type: 'string',
description:
'Set the OTLP endpoint for telemetry. Overrides environment variables and settings files.',
})
.option('telemetry-log-prompts', {
type: 'boolean',
description:
'Enable or disable logging of user prompts for telemetry. Overrides settings files.',
})
.option('telemetry-outfile', {
type: 'string',
description: 'Redirect all telemetry output to the specified file.',
})
.option('checkpointing', {
alias: 'c',
type: 'boolean',
description: 'Enables checkpointing of file edits',
default: false,
})
.option('experimental-acp', {
type: 'boolean',
description: 'Starts the agent in ACP mode',
})
.option('allowed-mcp-server-names', {
type: 'array',
string: true,
description: 'Allowed MCP server names',
})
.option('extensions', {
alias: 'e',
type: 'array',
string: true,
description:
'A list of extensions to use. If not provided, all extensions are used.',
})
.option('list-extensions', {
alias: 'l',
type: 'boolean',
description: 'List all available extensions and exit.',
})
.option('ide-mode', {
type: 'boolean',
description: 'Run in IDE mode?',
})
.option('openai-logging', {
type: 'boolean',
description:
'Enable logging of OpenAI API calls for debugging and analysis',
})
.option('openai-api-key', {
type: 'string',
description: 'OpenAI API key to use for authentication',
})
.option('openai-base-url', {
type: 'string',
description: 'OpenAI base URL (for custom endpoints)',
})
.option('proxy', {
type: 'string',
description:
'Proxy for gemini client, like schema://user:password@host:port',
})
// Register MCP subcommands
.command(mcpCommand)
.version(await getCliVersion()) // This will enable the --version flag based on package.json
.alias('v', 'version')
.help()
.alias('h', 'help')
.strict()
.check((argv) => {
if (argv.prompt && argv.promptInteractive) {
throw new Error(
'Cannot use both --prompt (-p) and --prompt-interactive (-i) together',
);
}
return true;
});
.demandCommand(0, 0); // Allow base command to run with no subcommands
yargsInstance.wrap(yargsInstance.terminalWidth());
return yargsInstance.argv;
const result = await yargsInstance.parse();
// Handle case where MCP subcommands are executed - they should exit the process
// and not return to main CLI logic
if (result._.length > 0 && result._[0] === 'mcp') {
// MCP commands handle their own execution and process exit
process.exit(0);
}
// The import format is now only controlled by settings.memoryImportFormat
// We no longer accept it as a CLI argument
return result as unknown as CliArgs;
}
// This function is now a thin wrapper around the server's implementation.
@@ -238,25 +278,37 @@ export async function parseArguments(): Promise<CliArgs> {
// TODO: Consider if App.tsx should get memory via a server call or if Config should refresh itself.
export async function loadHierarchicalGeminiMemory(
currentWorkingDirectory: string,
includeDirectoriesToReadGemini: readonly string[] = [],
debugMode: boolean,
fileService: FileDiscoveryService,
settings: Settings,
extensionContextFilePaths: string[] = [],
memoryImportFormat: 'flat' | 'tree' = 'tree',
fileFilteringOptions?: FileFilteringOptions,
): Promise<{ memoryContent: string; fileCount: number }> {
// FIX: Use real, canonical paths for a reliable comparison to handle symlinks.
const realCwd = fs.realpathSync(path.resolve(currentWorkingDirectory));
const realHome = fs.realpathSync(path.resolve(homedir()));
const isHomeDirectory = realCwd === realHome;
// If it is the home directory, pass an empty string to the core memory
// function to signal that it should skip the workspace search.
const effectiveCwd = isHomeDirectory ? '' : currentWorkingDirectory;
if (debugMode) {
logger.debug(
`CLI: Delegating hierarchical memory load to server for CWD: ${currentWorkingDirectory}`,
`CLI: Delegating hierarchical memory load to server for CWD: ${currentWorkingDirectory} (memoryImportFormat: ${memoryImportFormat})`,
);
}
// Directly call the server function.
// The server function will use its own homedir() for the global path.
// Directly call the server function with the corrected path.
return loadServerHierarchicalMemory(
currentWorkingDirectory,
effectiveCwd,
includeDirectoriesToReadGemini,
debugMode,
fileService,
extensionContextFilePaths,
memoryImportFormat,
fileFilteringOptions,
settings.memoryDiscoveryMaxDirs,
);
@@ -272,17 +324,17 @@ export async function loadCliConfig(
argv.debug ||
[process.env.DEBUG, process.env.DEBUG_MODE].some(
(v) => v === 'true' || v === '1',
);
) ||
false;
const memoryImportFormat = settings.memoryImportFormat || 'tree';
const ideMode =
(argv.ideMode ?? settings.ideMode ?? false) &&
process.env.TERM_PROGRAM === 'vscode' &&
!process.env.SANDBOX;
const ideMode = settings.ideMode ?? false;
const ideModeFeature =
argv.ideModeFeature ?? settings.ideModeFeature ?? false;
let ideClient: IdeClient | undefined;
if (ideMode) {
ideClient = new IdeClient();
}
const folderTrustFeature = settings.folderTrustFeature ?? false;
const folderTrustSetting = settings.folderTrust ?? false;
const folderTrust = folderTrustFeature && folderTrustSetting;
const allExtensions = annotateActiveExtensions(
extensions,
@@ -302,6 +354,11 @@ export async function loadCliConfig(
process.env.OPENAI_BASE_URL = argv.openaiBaseUrl;
}
// Handle Tavily API key from command line
if (argv.tavilyApiKey) {
process.env.TAVILY_API_KEY = argv.tavilyApiKey;
}
// Set the context filename in the server's memoryTool module BEFORE loading memory
// TODO(b/343434939): This is a bit of a hack. The contextFileName should ideally be passed
// directly to the Config constructor in core, and have core handle setGeminiMdFilename.
@@ -324,28 +381,48 @@ export async function loadCliConfig(
...settings.fileFiltering,
};
const includeDirectories = (settings.includeDirectories || [])
.map(resolvePath)
.concat((argv.includeDirectories || []).map(resolvePath));
// Call the (now wrapper) loadHierarchicalGeminiMemory which calls the server's version
const { memoryContent, fileCount } = await loadHierarchicalGeminiMemory(
process.cwd(),
settings.loadMemoryFromIncludeDirectories ? includeDirectories : [],
debugMode,
fileService,
settings,
extensionContextFilePaths,
memoryImportFormat,
fileFiltering,
);
let mcpServers = mergeMcpServers(settings, activeExtensions);
const excludeTools = mergeExcludeTools(settings, activeExtensions);
const question = argv.promptInteractive || argv.prompt || '';
const approvalMode =
argv.yolo || false ? ApprovalMode.YOLO : ApprovalMode.DEFAULT;
const interactive =
!!argv.promptInteractive || (process.stdin.isTTY && question.length === 0);
// In non-interactive and non-yolo mode, exclude interactive built in tools.
const extraExcludes =
!interactive && approvalMode !== ApprovalMode.YOLO
? [ShellTool.Name, EditTool.Name, WriteFileTool.Name]
: undefined;
const excludeTools = mergeExcludeTools(
settings,
activeExtensions,
extraExcludes,
);
const blockedMcpServers: Array<{ name: string; extensionName: string }> = [];
if (!argv.allowedMcpServerNames) {
if (settings.allowMCPServers) {
const allowedNames = new Set(settings.allowMCPServers.filter(Boolean));
if (allowedNames.size > 0) {
mcpServers = Object.fromEntries(
Object.entries(mcpServers).filter(([key]) => allowedNames.has(key)),
);
}
mcpServers = allowedMcpServers(
mcpServers,
settings.allowMCPServers,
blockedMcpServers,
);
}
if (settings.excludeMCPServers) {
@@ -359,40 +436,26 @@ export async function loadCliConfig(
}
if (argv.allowedMcpServerNames) {
const allowedNames = new Set(argv.allowedMcpServerNames.filter(Boolean));
if (allowedNames.size > 0) {
mcpServers = Object.fromEntries(
Object.entries(mcpServers).filter(([key, server]) => {
const isAllowed = allowedNames.has(key);
if (!isAllowed) {
blockedMcpServers.push({
name: key,
extensionName: server.extensionName || '',
});
}
return isAllowed;
}),
);
} else {
blockedMcpServers.push(
...Object.entries(mcpServers).map(([key, server]) => ({
name: key,
extensionName: server.extensionName || '',
})),
);
mcpServers = {};
}
mcpServers = allowedMcpServers(
mcpServers,
argv.allowedMcpServerNames,
blockedMcpServers,
);
}
const sandboxConfig = await loadSandboxConfig(settings, argv);
const cliVersion = await getCliVersion();
return new Config({
sessionId,
embeddingModel: DEFAULT_GEMINI_EMBEDDING_MODEL,
sandbox: sandboxConfig,
targetDir: process.cwd(),
includeDirectories,
loadMemoryFromIncludeDirectories:
settings.loadMemoryFromIncludeDirectories || false,
debugMode,
question: argv.promptInteractive || argv.prompt || '',
question,
fullContext: argv.allFiles || argv.all_files || false,
coreTools: settings.coreTools || undefined,
excludeTools,
@@ -402,7 +465,7 @@ export async function loadCliConfig(
mcpServers,
userMemory: memoryContent,
geminiMdFileCount: fileCount,
approvalMode: argv.yolo || false ? ApprovalMode.YOLO : ApprovalMode.DEFAULT,
approvalMode,
showMemoryUsage:
argv.showMemoryUsage ||
argv.show_memory_usage ||
@@ -438,11 +501,10 @@ export async function loadCliConfig(
cwd: process.cwd(),
fileDiscoveryService: fileService,
bugCommand: settings.bugCommand,
model: argv.model!,
model: argv.model || settings.model || DEFAULT_GEMINI_MODEL,
extensionContextFilePaths,
maxSessionTurns: settings.maxSessionTurns ?? -1,
sessionTokenLimit: settings.sessionTokenLimit ?? 32000,
maxFolderItems: settings.maxFolderItems ?? 20,
sessionTokenLimit: settings.sessionTokenLimit ?? -1,
experimentalAcp: argv.experimentalAcp || false,
listExtensions: argv.listExtensions || false,
extensions: allExtensions,
@@ -450,13 +512,13 @@ export async function loadCliConfig(
noBrowser: !!process.env.NO_BROWSER,
summarizeToolOutput: settings.summarizeToolOutput,
ideMode,
ideClient,
ideModeFeature,
enableOpenAILogging:
(typeof argv.openaiLogging === 'undefined'
? settings.enableOpenAILogging
: argv.openaiLogging) ?? false,
sampling_params: settings.sampling_params,
systemPromptMappings: settings.systemPromptMappings ?? [
systemPromptMappings: (settings.systemPromptMappings ?? [
{
baseUrls: [
'https://dashscope.aliyuncs.com/compatible-mode/v1/',
@@ -466,11 +528,49 @@ export async function loadCliConfig(
template:
'SYSTEM_TEMPLATE:{"name":"qwen3_coder","params":{"is_git_repository":{RUNTIME_VARS_IS_GIT_REPO},"sandbox":"{RUNTIME_VARS_SANDBOX}"}}',
},
],
]) as ConfigParameters['systemPromptMappings'],
contentGenerator: settings.contentGenerator,
cliVersion,
tavilyApiKey:
argv.tavilyApiKey || settings.tavilyApiKey || process.env.TAVILY_API_KEY,
chatCompression: settings.chatCompression,
folderTrustFeature,
folderTrust,
interactive,
});
}
function allowedMcpServers(
mcpServers: { [x: string]: MCPServerConfig },
allowMCPServers: string[],
blockedMcpServers: Array<{ name: string; extensionName: string }>,
) {
const allowedNames = new Set(allowMCPServers.filter(Boolean));
if (allowedNames.size > 0) {
mcpServers = Object.fromEntries(
Object.entries(mcpServers).filter(([key, server]) => {
const isAllowed = allowedNames.has(key);
if (!isAllowed) {
blockedMcpServers.push({
name: key,
extensionName: server.extensionName || '',
});
}
return isAllowed;
}),
);
} else {
blockedMcpServers.push(
...Object.entries(mcpServers).map(([key, server]) => ({
name: key,
extensionName: server.extensionName || '',
})),
);
mcpServers = {};
}
return mcpServers;
}
function mergeMcpServers(settings: Settings, extensions: Extension[]) {
const mcpServers = { ...(settings.mcpServers || {}) };
for (const extension of extensions) {
@@ -495,8 +595,12 @@ function mergeMcpServers(settings: Settings, extensions: Extension[]) {
function mergeExcludeTools(
settings: Settings,
extensions: Extension[],
extraExcludes?: string[] | undefined,
): string[] {
const allExcludeTools = new Set(settings.excludeTools || []);
const allExcludeTools = new Set([
...(settings.excludeTools || []),
...(extraExcludes || []),
]);
for (const extension of extensions) {
for (const tool of extension.config.excludeTools || []) {
allExcludeTools.add(tool);
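Taken together, the config changes above derive interactivity from the prompt flags and the TTY, and when a run is neither interactive nor in YOLO mode they add the shell, edit, and write tools to the exclusion list. A minimal sketch of that rule, using the tool names the tests exercise; extraExcludesFor is an illustrative helper, not part of config.ts:

const INTERACTIVE_TOOLS = ['run_shell_command', 'replace', 'write_file'];

function extraExcludesFor(opts: {
  promptInteractive?: string;
  prompt?: string;
  isTTY: boolean;
  yolo: boolean;
}): string[] {
  const question = opts.promptInteractive || opts.prompt || '';
  const interactive =
    !!opts.promptInteractive || (opts.isTTY && question.length === 0);
  return !interactive && !opts.yolo ? INTERACTIVE_TOOLS : [];
}

extraExcludesFor({ prompt: 'test', isTTY: false, yolo: false }); // excludes all three
extraExcludesFor({ prompt: 'test', isTTY: false, yolo: true }); // nothing extra excluded
extraExcludesFor({ isTTY: true, yolo: false }); // interactive session, nothing extra excluded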


@@ -29,10 +29,10 @@ describe('loadExtensions', () => {
beforeEach(() => {
tempWorkspaceDir = fs.mkdtempSync(
path.join(os.tmpdir(), 'gemini-cli-test-workspace-'),
path.join(os.tmpdir(), 'qwen-code-test-workspace-'),
);
tempHomeDir = fs.mkdtempSync(
path.join(os.tmpdir(), 'gemini-cli-test-home-'),
path.join(os.tmpdir(), 'qwen-code-test-home-'),
);
vi.mocked(os.homedir).mockReturnValue(tempHomeDir);
});
@@ -42,6 +42,81 @@ describe('loadExtensions', () => {
fs.rmSync(tempHomeDir, { recursive: true, force: true });
});
it('should include extension path in loaded extension', () => {
const workspaceExtensionsDir = path.join(
tempWorkspaceDir,
EXTENSIONS_DIRECTORY_NAME,
);
fs.mkdirSync(workspaceExtensionsDir, { recursive: true });
const extensionDir = path.join(workspaceExtensionsDir, 'test-extension');
fs.mkdirSync(extensionDir, { recursive: true });
const config = {
name: 'test-extension',
version: '1.0.0',
};
fs.writeFileSync(
path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME),
JSON.stringify(config),
);
const extensions = loadExtensions(tempWorkspaceDir);
expect(extensions).toHaveLength(1);
expect(extensions[0].path).toBe(extensionDir);
expect(extensions[0].config.name).toBe('test-extension');
});
it('should include extension path in loaded extension', () => {
const workspaceExtensionsDir = path.join(
tempWorkspaceDir,
EXTENSIONS_DIRECTORY_NAME,
);
fs.mkdirSync(workspaceExtensionsDir, { recursive: true });
const extensionDir = path.join(workspaceExtensionsDir, 'test-extension');
fs.mkdirSync(extensionDir, { recursive: true });
const config = {
name: 'test-extension',
version: '1.0.0',
};
fs.writeFileSync(
path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME),
JSON.stringify(config),
);
const extensions = loadExtensions(tempWorkspaceDir);
expect(extensions).toHaveLength(1);
expect(extensions[0].path).toBe(extensionDir);
expect(extensions[0].config.name).toBe('test-extension');
});
it('should include extension path in loaded extension', () => {
const workspaceExtensionsDir = path.join(
tempWorkspaceDir,
EXTENSIONS_DIRECTORY_NAME,
);
fs.mkdirSync(workspaceExtensionsDir, { recursive: true });
const extensionDir = path.join(workspaceExtensionsDir, 'test-extension');
fs.mkdirSync(extensionDir, { recursive: true });
const config = {
name: 'test-extension',
version: '1.0.0',
};
fs.writeFileSync(
path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME),
JSON.stringify(config),
);
const extensions = loadExtensions(tempWorkspaceDir);
expect(extensions).toHaveLength(1);
expect(extensions[0].path).toBe(extensionDir);
expect(extensions[0].config.name).toBe('test-extension');
});
it('should load context file path when QWEN.md is present', () => {
const workspaceExtensionsDir = path.join(
tempWorkspaceDir,


@@ -10,9 +10,11 @@ import * as path from 'path';
import * as os from 'os';
export const EXTENSIONS_DIRECTORY_NAME = path.join('.qwen', 'extensions');
export const EXTENSIONS_CONFIG_FILENAME = 'gemini-extension.json';
export const EXTENSIONS_CONFIG_FILENAME = 'qwen-extension.json';
export const EXTENSIONS_CONFIG_FILENAME_OLD = 'gemini-extension.json';
export interface Extension {
path: string;
config: ExtensionConfig;
contextFiles: string[];
}
@@ -67,12 +69,19 @@ function loadExtension(extensionDir: string): Extension | null {
return null;
}
const configFilePath = path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME);
let configFilePath = path.join(extensionDir, EXTENSIONS_CONFIG_FILENAME);
if (!fs.existsSync(configFilePath)) {
console.error(
`Warning: extension directory ${extensionDir} does not contain a config file ${configFilePath}.`,
const oldConfigFilePath = path.join(
extensionDir,
EXTENSIONS_CONFIG_FILENAME_OLD,
);
return null;
if (!fs.existsSync(oldConfigFilePath)) {
console.error(
`Warning: extension directory ${extensionDir} does not contain a config file ${configFilePath}.`,
);
return null;
}
configFilePath = oldConfigFilePath;
}
try {
@@ -90,6 +99,7 @@ function loadExtension(extensionDir: string): Extension | null {
.filter((contextFilePath) => fs.existsSync(contextFilePath));
return {
path: extensionDir,
config,
contextFiles,
};
@@ -121,6 +131,7 @@ export function annotateActiveExtensions(
name: extension.config.name,
version: extension.config.version,
isActive: true,
path: extension.path,
}));
}
@@ -136,6 +147,7 @@ export function annotateActiveExtensions(
name: extension.config.name,
version: extension.config.version,
isActive: false,
path: extension.path,
}));
}
@@ -153,6 +165,7 @@ export function annotateActiveExtensions(
name: extension.config.name,
version: extension.config.version,
isActive,
path: extension.path,
});
}
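The loader now prefers qwen-extension.json and falls back to the legacy gemini-extension.json when only that file exists. A standalone sketch of the lookup order; resolveExtensionConfigPath is an illustrative helper, not an export of extension.ts:

import * as fs from 'fs';
import * as path from 'path';

function resolveExtensionConfigPath(extensionDir: string): string | null {
  const preferred = path.join(extensionDir, 'qwen-extension.json');
  if (fs.existsSync(preferred)) return preferred;
  const legacy = path.join(extensionDir, 'gemini-extension.json');
  // null means the directory has no config file and the extension is skipped with a warning.
  return fs.existsSync(legacy) ? legacy : null;
}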


@@ -0,0 +1,62 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect } from 'vitest';
import {
Command,
KeyBindingConfig,
defaultKeyBindings,
} from './keyBindings.js';
describe('keyBindings config', () => {
describe('defaultKeyBindings', () => {
it('should have bindings for all commands', () => {
const commands = Object.values(Command);
for (const command of commands) {
expect(defaultKeyBindings[command]).toBeDefined();
expect(Array.isArray(defaultKeyBindings[command])).toBe(true);
}
});
it('should have valid key binding structures', () => {
for (const [_, bindings] of Object.entries(defaultKeyBindings)) {
for (const binding of bindings) {
// Each binding should have either key or sequence, but not both
const hasKey = binding.key !== undefined;
const hasSequence = binding.sequence !== undefined;
expect(hasKey || hasSequence).toBe(true);
expect(hasKey && hasSequence).toBe(false);
// Modifier properties should be boolean or undefined
if (binding.ctrl !== undefined) {
expect(typeof binding.ctrl).toBe('boolean');
}
if (binding.shift !== undefined) {
expect(typeof binding.shift).toBe('boolean');
}
if (binding.command !== undefined) {
expect(typeof binding.command).toBe('boolean');
}
if (binding.paste !== undefined) {
expect(typeof binding.paste).toBe('boolean');
}
}
}
});
it('should export all required types', () => {
// Basic type checks
expect(typeof Command.HOME).toBe('string');
expect(typeof Command.END).toBe('string');
// Config should be readonly
const config: KeyBindingConfig = defaultKeyBindings;
expect(config[Command.HOME]).toBeDefined();
});
});
});

View File

@@ -0,0 +1,179 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
/**
* Command enum for all available keyboard shortcuts
*/
export enum Command {
// Basic bindings
RETURN = 'return',
ESCAPE = 'escape',
// Cursor movement
HOME = 'home',
END = 'end',
// Text deletion
KILL_LINE_RIGHT = 'killLineRight',
KILL_LINE_LEFT = 'killLineLeft',
CLEAR_INPUT = 'clearInput',
// Screen control
CLEAR_SCREEN = 'clearScreen',
// History navigation
HISTORY_UP = 'historyUp',
HISTORY_DOWN = 'historyDown',
NAVIGATION_UP = 'navigationUp',
NAVIGATION_DOWN = 'navigationDown',
// Auto-completion
ACCEPT_SUGGESTION = 'acceptSuggestion',
COMPLETION_UP = 'completionUp',
COMPLETION_DOWN = 'completionDown',
// Text input
SUBMIT = 'submit',
NEWLINE = 'newline',
// External tools
OPEN_EXTERNAL_EDITOR = 'openExternalEditor',
PASTE_CLIPBOARD_IMAGE = 'pasteClipboardImage',
// App level bindings
SHOW_ERROR_DETAILS = 'showErrorDetails',
TOGGLE_TOOL_DESCRIPTIONS = 'toggleToolDescriptions',
TOGGLE_IDE_CONTEXT_DETAIL = 'toggleIDEContextDetail',
QUIT = 'quit',
EXIT = 'exit',
SHOW_MORE_LINES = 'showMoreLines',
// Shell commands
REVERSE_SEARCH = 'reverseSearch',
SUBMIT_REVERSE_SEARCH = 'submitReverseSearch',
ACCEPT_SUGGESTION_REVERSE_SEARCH = 'acceptSuggestionReverseSearch',
}
/**
* Data-driven key binding structure for user configuration
*/
export interface KeyBinding {
/** The key name (e.g., 'a', 'return', 'tab', 'escape') */
key?: string;
/** The key sequence (e.g., '\x18' for Ctrl+X) - alternative to key name */
sequence?: string;
/** Control key requirement: true=must be pressed, false=must not be pressed, undefined=ignore */
ctrl?: boolean;
/** Shift key requirement: true=must be pressed, false=must not be pressed, undefined=ignore */
shift?: boolean;
/** Command/meta key requirement: true=must be pressed, false=must not be pressed, undefined=ignore */
command?: boolean;
/** Paste operation requirement: true=must be paste, false=must not be paste, undefined=ignore */
paste?: boolean;
}
/**
* Configuration type mapping commands to their key bindings
*/
export type KeyBindingConfig = {
readonly [C in Command]: readonly KeyBinding[];
};
/**
* Default key binding configuration
* Matches the original hard-coded logic exactly
*/
export const defaultKeyBindings: KeyBindingConfig = {
// Basic bindings
[Command.RETURN]: [{ key: 'return' }],
// Original: key.name === 'escape'
[Command.ESCAPE]: [{ key: 'escape' }],
// Cursor movement
// Original: key.ctrl && key.name === 'a'
[Command.HOME]: [{ key: 'a', ctrl: true }],
// Original: key.ctrl && key.name === 'e'
[Command.END]: [{ key: 'e', ctrl: true }],
// Text deletion
// Original: key.ctrl && key.name === 'k'
[Command.KILL_LINE_RIGHT]: [{ key: 'k', ctrl: true }],
// Original: key.ctrl && key.name === 'u'
[Command.KILL_LINE_LEFT]: [{ key: 'u', ctrl: true }],
// Original: key.ctrl && key.name === 'c'
[Command.CLEAR_INPUT]: [{ key: 'c', ctrl: true }],
// Screen control
// Original: key.ctrl && key.name === 'l'
[Command.CLEAR_SCREEN]: [{ key: 'l', ctrl: true }],
// History navigation
// Original: key.ctrl && key.name === 'p'
[Command.HISTORY_UP]: [{ key: 'p', ctrl: true }],
// Original: key.ctrl && key.name === 'n'
[Command.HISTORY_DOWN]: [{ key: 'n', ctrl: true }],
// Original: key.name === 'up'
[Command.NAVIGATION_UP]: [{ key: 'up' }],
// Original: key.name === 'down'
[Command.NAVIGATION_DOWN]: [{ key: 'down' }],
// Auto-completion
// Original: key.name === 'tab' || (key.name === 'return' && !key.ctrl)
[Command.ACCEPT_SUGGESTION]: [{ key: 'tab' }, { key: 'return', ctrl: false }],
// Completion navigation (arrow or Ctrl+P/N)
[Command.COMPLETION_UP]: [{ key: 'up' }, { key: 'p', ctrl: true }],
[Command.COMPLETION_DOWN]: [{ key: 'down' }, { key: 'n', ctrl: true }],
// Text input
// Original: key.name === 'return' && !key.ctrl && !key.meta && !key.paste
[Command.SUBMIT]: [
{
key: 'return',
ctrl: false,
command: false,
paste: false,
},
],
// Original: key.name === 'return' && (key.ctrl || key.meta || key.paste)
// Split into multiple data-driven bindings
[Command.NEWLINE]: [
{ key: 'return', ctrl: true },
{ key: 'return', command: true },
{ key: 'return', paste: true },
],
// External tools
// Original: key.ctrl && (key.name === 'x' || key.sequence === '\x18')
[Command.OPEN_EXTERNAL_EDITOR]: [
{ key: 'x', ctrl: true },
{ sequence: '\x18', ctrl: true },
],
// Original: key.ctrl && key.name === 'v'
[Command.PASTE_CLIPBOARD_IMAGE]: [{ key: 'v', ctrl: true }],
// App level bindings
// Original: key.ctrl && key.name === 'o'
[Command.SHOW_ERROR_DETAILS]: [{ key: 'o', ctrl: true }],
// Original: key.ctrl && key.name === 't'
[Command.TOGGLE_TOOL_DESCRIPTIONS]: [{ key: 't', ctrl: true }],
// Original: key.ctrl && key.name === 'e'
[Command.TOGGLE_IDE_CONTEXT_DETAIL]: [{ key: 'e', ctrl: true }],
// Original: key.ctrl && (key.name === 'c' || key.name === 'C')
[Command.QUIT]: [{ key: 'c', ctrl: true }],
// Original: key.ctrl && (key.name === 'd' || key.name === 'D')
[Command.EXIT]: [{ key: 'd', ctrl: true }],
// Original: key.ctrl && key.name === 's'
[Command.SHOW_MORE_LINES]: [{ key: 's', ctrl: true }],
// Shell commands
// Original: key.ctrl && key.name === 'r'
[Command.REVERSE_SEARCH]: [{ key: 'r', ctrl: true }],
// Original: key.name === 'return' && !key.ctrl
// Note: original logic ONLY checked ctrl=false, ignored meta/shift/paste
[Command.SUBMIT_REVERSE_SEARCH]: [{ key: 'return', ctrl: false }],
// Original: key.name === 'tab'
[Command.ACCEPT_SUGGESTION_REVERSE_SEARCH]: [{ key: 'tab' }],
};
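
To make the data-driven semantics above concrete, here is a minimal matcher sketch. It assumes a parsed keypress shape with the fields the comments refer to (name, sequence, ctrl, meta, shift, paste) and maps the binding's command flag to the meta key; none of this is part of the file itself:

// Hypothetical keypress shape, based on the fields referenced in the comments
// above (key.name, key.sequence, key.ctrl, key.meta, key.shift, key.paste).
interface Key {
  name: string;
  sequence: string;
  ctrl: boolean;
  meta: boolean;
  shift: boolean;
  paste: boolean;
}

// A binding matches when its key or sequence agrees and every modifier it
// specifies (i.e. is not undefined) agrees too; undefined modifiers are ignored.
function matchesBinding(binding: KeyBinding, key: Key): boolean {
  if (binding.key !== undefined && binding.key !== key.name) return false;
  if (binding.sequence !== undefined && binding.sequence !== key.sequence)
    return false;
  if (binding.ctrl !== undefined && binding.ctrl !== key.ctrl) return false;
  if (binding.shift !== undefined && binding.shift !== key.shift) return false;
  if (binding.command !== undefined && binding.command !== key.meta)
    return false;
  if (binding.paste !== undefined && binding.paste !== key.paste) return false;
  return true;
}

// A command fires when any one of its bindings matches the keypress.
function matchesCommand(command: Command, key: Key): boolean {
  return defaultKeyBindings[command].some((b) => matchesBinding(b, key));
}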

View File

@@ -59,7 +59,21 @@ const MOCK_WORKSPACE_SETTINGS_PATH = pathActual.join(
'settings.json',
);
vi.mock('fs');
vi.mock('fs', async (importOriginal) => {
// Get all the functions from the real 'fs' module
const actualFs = await importOriginal<typeof fs>();
return {
...actualFs, // Keep all the real functions
// Now, just override the ones we need for the test
existsSync: vi.fn(),
readFileSync: vi.fn(),
writeFileSync: vi.fn(),
mkdirSync: vi.fn(),
realpathSync: (p: string) => p,
};
});
vi.mock('strip-json-comments', () => ({
default: vi.fn((content) => content),
}));
@@ -98,6 +112,8 @@ describe('Settings Loading and Merging', () => {
expect(settings.merged).toEqual({
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
expect(settings.errors.length).toBe(0);
});
@@ -131,6 +147,8 @@ describe('Settings Loading and Merging', () => {
...systemSettingsContent,
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
});
@@ -164,6 +182,8 @@ describe('Settings Loading and Merging', () => {
...userSettingsContent,
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
});
@@ -195,6 +215,8 @@ describe('Settings Loading and Merging', () => {
...workspaceSettingsContent,
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
});
@@ -232,6 +254,8 @@ describe('Settings Loading and Merging', () => {
contextFileName: 'WORKSPACE_CONTEXT.md',
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
});
@@ -281,9 +305,67 @@ describe('Settings Loading and Merging', () => {
allowMCPServers: ['server1', 'server2'],
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
});
it('should ignore folderTrust from workspace settings', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const userSettingsContent = {
folderTrust: true,
};
const workspaceSettingsContent = {
folderTrust: false, // This should be ignored
};
const systemSettingsContent = {
// No folderTrust here
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === getSystemSettingsPath())
return JSON.stringify(systemSettingsContent);
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.folderTrust).toBe(true); // User setting should be used
});
it('should use system folderTrust over user setting', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const userSettingsContent = {
folderTrust: false,
};
const workspaceSettingsContent = {
folderTrust: true, // This should be ignored
};
const systemSettingsContent = {
folderTrust: true,
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === getSystemSettingsPath())
return JSON.stringify(systemSettingsContent);
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.folderTrust).toBe(true); // System setting should be used
});
it('should handle contextFileName correctly when only in user settings', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
@@ -320,6 +402,86 @@ describe('Settings Loading and Merging', () => {
expect(settings.merged.contextFileName).toBe('PROJECT_SPECIFIC.md');
});
it('should handle excludedProjectEnvVars correctly when only in user settings', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const userSettingsContent = {
excludedProjectEnvVars: ['DEBUG', 'NODE_ENV', 'CUSTOM_VAR'],
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.excludedProjectEnvVars).toEqual([
'DEBUG',
'NODE_ENV',
'CUSTOM_VAR',
]);
});
it('should handle excludedProjectEnvVars correctly when only in workspace settings', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === MOCK_WORKSPACE_SETTINGS_PATH,
);
const workspaceSettingsContent = {
excludedProjectEnvVars: ['WORKSPACE_DEBUG', 'WORKSPACE_VAR'],
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.excludedProjectEnvVars).toEqual([
'WORKSPACE_DEBUG',
'WORKSPACE_VAR',
]);
});
it('should merge excludedProjectEnvVars with workspace taking precedence over user', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const userSettingsContent = {
excludedProjectEnvVars: ['DEBUG', 'NODE_ENV', 'USER_VAR'],
};
const workspaceSettingsContent = {
excludedProjectEnvVars: ['WORKSPACE_DEBUG', 'WORKSPACE_VAR'],
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.user.settings.excludedProjectEnvVars).toEqual([
'DEBUG',
'NODE_ENV',
'USER_VAR',
]);
expect(settings.workspace.settings.excludedProjectEnvVars).toEqual([
'WORKSPACE_DEBUG',
'WORKSPACE_VAR',
]);
expect(settings.merged.excludedProjectEnvVars).toEqual([
'WORKSPACE_DEBUG',
'WORKSPACE_VAR',
]);
});
it('should default contextFileName to undefined if not in any settings file', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const userSettingsContent = { theme: 'dark' };
@@ -522,6 +684,150 @@ describe('Settings Loading and Merging', () => {
expect(settings.merged.mcpServers).toEqual({});
});
it('should merge chatCompression settings, with workspace taking precedence', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const userSettingsContent = {
chatCompression: { contextPercentageThreshold: 0.5 },
};
const workspaceSettingsContent = {
chatCompression: { contextPercentageThreshold: 0.8 },
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.user.settings.chatCompression).toEqual({
contextPercentageThreshold: 0.5,
});
expect(settings.workspace.settings.chatCompression).toEqual({
contextPercentageThreshold: 0.8,
});
expect(settings.merged.chatCompression).toEqual({
contextPercentageThreshold: 0.8,
});
});
it('should handle chatCompression when only in user settings', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const userSettingsContent = {
chatCompression: { contextPercentageThreshold: 0.5 },
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.chatCompression).toEqual({
contextPercentageThreshold: 0.5,
});
});
it('should have chatCompression as an empty object if not in any settings file', () => {
(mockFsExistsSync as Mock).mockReturnValue(false); // No settings files exist
(fs.readFileSync as Mock).mockReturnValue('{}');
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.chatCompression).toEqual({});
});
it('should ignore chatCompression if contextPercentageThreshold is invalid', () => {
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const userSettingsContent = {
chatCompression: { contextPercentageThreshold: 1.5 },
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.chatCompression).toBeUndefined();
expect(warnSpy).toHaveBeenCalledWith(
'Invalid value for chatCompression.contextPercentageThreshold: "1.5". Please use a value between 0 and 1. Using default compression settings.',
);
warnSpy.mockRestore();
});
it('should deep merge chatCompression settings', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const userSettingsContent = {
chatCompression: { contextPercentageThreshold: 0.5 },
};
const workspaceSettingsContent = {
chatCompression: {},
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.chatCompression).toEqual({
contextPercentageThreshold: 0.5,
});
});
it('should merge includeDirectories from all scopes', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const systemSettingsContent = {
includeDirectories: ['/system/dir'],
};
const userSettingsContent = {
includeDirectories: ['/user/dir1', '/user/dir2'],
};
const workspaceSettingsContent = {
includeDirectories: ['/workspace/dir'],
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === getSystemSettingsPath())
return JSON.stringify(systemSettingsContent);
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.includeDirectories).toEqual([
'/system/dir',
'/user/dir1',
'/user/dir2',
'/workspace/dir',
]);
});
it('should handle JSON parsing errors gracefully', () => {
(mockFsExistsSync as Mock).mockReturnValue(true); // Both files "exist"
const invalidJsonContent = 'invalid json';
@@ -560,6 +866,8 @@ describe('Settings Loading and Merging', () => {
expect(settings.merged).toEqual({
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
// Check that error objects are populated in settings.errors
@@ -777,6 +1085,48 @@ describe('Settings Loading and Merging', () => {
}
});
it('should correctly merge dnsResolutionOrder with workspace taking precedence', () => {
(mockFsExistsSync as Mock).mockReturnValue(true);
const userSettingsContent = {
dnsResolutionOrder: 'ipv4first',
};
const workspaceSettingsContent = {
dnsResolutionOrder: 'verbatim',
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.dnsResolutionOrder).toBe('verbatim');
});
it('should use user dnsResolutionOrder if workspace is not defined', () => {
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
const userSettingsContent = {
dnsResolutionOrder: 'verbatim',
};
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.merged.dnsResolutionOrder).toBe('verbatim');
});
it('should leave unresolved environment variables as is', () => {
const userSettingsContent = { apiKey: '$UNDEFINED_VAR' };
(mockFsExistsSync as Mock).mockImplementation(
@@ -954,6 +1304,8 @@ describe('Settings Loading and Merging', () => {
...systemSettingsContent,
customThemes: {},
mcpServers: {},
includeDirectories: [],
chatCompression: {},
});
});
});
@@ -999,4 +1351,140 @@ describe('Settings Loading and Merging', () => {
expect(loadedSettings.merged.theme).toBe('ocean');
});
});
describe('excludedProjectEnvVars integration', () => {
const originalEnv = { ...process.env };
beforeEach(() => {
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should exclude DEBUG and DEBUG_MODE from project .env files by default', () => {
// Create a workspace settings file with excludedProjectEnvVars
const workspaceSettingsContent = {
excludedProjectEnvVars: ['DEBUG', 'DEBUG_MODE'],
};
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === MOCK_WORKSPACE_SETTINGS_PATH,
);
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
// Mock findEnvFile to return a project .env file
const originalFindEnvFile = (
loadSettings as unknown as { findEnvFile: () => string }
).findEnvFile;
(loadSettings as unknown as { findEnvFile: () => string }).findEnvFile =
() => '/mock/project/.env';
// Mock fs.readFileSync for .env file content
const originalReadFileSync = fs.readFileSync;
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === '/mock/project/.env') {
return 'DEBUG=true\nDEBUG_MODE=1\nGEMINI_API_KEY=test-key';
}
if (p === MOCK_WORKSPACE_SETTINGS_PATH) {
return JSON.stringify(workspaceSettingsContent);
}
return '{}';
},
);
try {
// This will call loadEnvironment internally with the merged settings
const settings = loadSettings(MOCK_WORKSPACE_DIR);
// Verify the settings were loaded correctly
expect(settings.merged.excludedProjectEnvVars).toEqual([
'DEBUG',
'DEBUG_MODE',
]);
// Note: We can't directly test process.env changes here because the mocking
// prevents the actual file system operations, but we can verify the settings
// are correctly merged and passed to loadEnvironment
} finally {
(loadSettings as unknown as { findEnvFile: () => string }).findEnvFile =
originalFindEnvFile;
(fs.readFileSync as Mock).mockImplementation(originalReadFileSync);
}
});
it('should respect custom excludedProjectEnvVars from user settings', () => {
const userSettingsContent = {
excludedProjectEnvVars: ['NODE_ENV', 'DEBUG'],
};
(mockFsExistsSync as Mock).mockImplementation(
(p: fs.PathLike) => p === USER_SETTINGS_PATH,
);
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.user.settings.excludedProjectEnvVars).toEqual([
'NODE_ENV',
'DEBUG',
]);
expect(settings.merged.excludedProjectEnvVars).toEqual([
'NODE_ENV',
'DEBUG',
]);
});
it('should merge excludedProjectEnvVars with workspace taking precedence', () => {
const userSettingsContent = {
excludedProjectEnvVars: ['DEBUG', 'NODE_ENV', 'USER_VAR'],
};
const workspaceSettingsContent = {
excludedProjectEnvVars: ['WORKSPACE_DEBUG', 'WORKSPACE_VAR'],
};
(mockFsExistsSync as Mock).mockReturnValue(true);
(fs.readFileSync as Mock).mockImplementation(
(p: fs.PathOrFileDescriptor) => {
if (p === USER_SETTINGS_PATH)
return JSON.stringify(userSettingsContent);
if (p === MOCK_WORKSPACE_SETTINGS_PATH)
return JSON.stringify(workspaceSettingsContent);
return '{}';
},
);
const settings = loadSettings(MOCK_WORKSPACE_DIR);
expect(settings.user.settings.excludedProjectEnvVars).toEqual([
'DEBUG',
'NODE_ENV',
'USER_VAR',
]);
expect(settings.workspace.settings.excludedProjectEnvVars).toEqual([
'WORKSPACE_DEBUG',
'WORKSPACE_VAR',
]);
expect(settings.merged.excludedProjectEnvVars).toEqual([
'WORKSPACE_DEBUG',
'WORKSPACE_VAR',
]);
});
});
});

View File

@@ -9,21 +9,20 @@ import * as path from 'path';
import { homedir, platform } from 'os';
import * as dotenv from 'dotenv';
import {
MCPServerConfig,
GEMINI_CONFIG_DIR as GEMINI_DIR,
getErrorMessage,
BugCommandSettings,
TelemetrySettings,
AuthType,
} from '@qwen-code/qwen-code-core';
import stripJsonComments from 'strip-json-comments';
import { DefaultLight } from '../ui/themes/default-light.js';
import { DefaultDark } from '../ui/themes/default.js';
import { CustomTheme } from '../ui/themes/theme.js';
import { Settings, MemoryImportFormat } from './settingsSchema.js';
export type { Settings, MemoryImportFormat };
export const SETTINGS_DIRECTORY_NAME = '.qwen';
export const USER_SETTINGS_DIR = path.join(homedir(), SETTINGS_DIRECTORY_NAME);
export const USER_SETTINGS_PATH = path.join(USER_SETTINGS_DIR, 'settings.json');
export const DEFAULT_EXCLUDED_ENV_VARS = ['DEBUG', 'DEBUG_MODE'];
export function getSystemSettingsPath(): string {
if (process.env.GEMINI_CLI_SYSTEM_SETTINGS_PATH) {
@@ -38,6 +37,12 @@ export function getSystemSettingsPath(): string {
}
}
export function getWorkspaceSettingsPath(workspaceDir: string): string {
return path.join(workspaceDir, SETTINGS_DIRECTORY_NAME, 'settings.json');
}
export type { DnsResolutionOrder } from './settingsSchema.js';
export enum SettingScope {
User = 'User',
Workspace = 'Workspace',
@@ -56,71 +61,6 @@ export interface AccessibilitySettings {
disableLoadingPhrases?: boolean;
}
export interface Settings {
theme?: string;
customThemes?: Record<string, CustomTheme>;
selectedAuthType?: AuthType;
sandbox?: boolean | string;
coreTools?: string[];
excludeTools?: string[];
toolDiscoveryCommand?: string;
toolCallCommand?: string;
mcpServerCommand?: string;
mcpServers?: Record<string, MCPServerConfig>;
allowMCPServers?: string[];
excludeMCPServers?: string[];
showMemoryUsage?: boolean;
contextFileName?: string | string[];
accessibility?: AccessibilitySettings;
telemetry?: TelemetrySettings;
usageStatisticsEnabled?: boolean;
preferredEditor?: string;
bugCommand?: BugCommandSettings;
checkpointing?: CheckpointingSettings;
autoConfigureMaxOldSpaceSize?: boolean;
enableOpenAILogging?: boolean;
// Git-aware file filtering settings
fileFiltering?: {
respectGitIgnore?: boolean;
respectGeminiIgnore?: boolean;
enableRecursiveFileSearch?: boolean;
};
hideWindowTitle?: boolean;
hideTips?: boolean;
hideBanner?: boolean;
// Setting for setting maximum number of user/model/tool turns in a session.
maxSessionTurns?: number;
// Setting for maximum token limit for conversation history before blocking requests
sessionTokenLimit?: number;
// Setting for maximum number of files and folders to show in folder structure
maxFolderItems?: number;
// A map of tool names to their summarization settings.
summarizeToolOutput?: Record<string, SummarizeToolOutputSettings>;
vimMode?: boolean;
// Add other settings here.
ideMode?: boolean;
memoryDiscoveryMaxDirs?: number;
sampling_params?: Record<string, unknown>;
systemPromptMappings?: Array<{
baseUrls: string[];
modelNames: string[];
template: string;
}>;
contentGenerator?: {
timeout?: number;
maxRetries?: number;
};
}
export interface SettingsError {
message: string;
path: string;
@@ -160,9 +100,13 @@ export class LoadedSettings {
const user = this.user.settings;
const workspace = this.workspace.settings;
// folderTrust is not supported at workspace level.
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const { folderTrust, ...workspaceWithoutFolderTrust } = workspace;
return {
...user,
...workspace,
...workspaceWithoutFolderTrust,
...system,
customThemes: {
...(user.customThemes || {}),
@@ -174,6 +118,16 @@ export class LoadedSettings {
...(workspace.mcpServers || {}),
...(system.mcpServers || {}),
},
includeDirectories: [
...(system.includeDirectories || []),
...(user.includeDirectories || []),
...(workspace.includeDirectories || []),
],
chatCompression: {
...(system.chatCompression || {}),
...(user.chatCompression || {}),
...(workspace.chatCompression || {}),
},
};
}
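
Read bottom-up, the getter above gives system settings the last word on overlapping top-level keys, drops folderTrust from the workspace layer, concatenates includeDirectories across all three scopes, and shallow-merges chatCompression with workspace last. A worked illustration with hypothetical values (a sketch of the same spreads, not a call into LoadedSettings):

const systemS: Settings = { includeDirectories: ['/s'] };
const userS: Settings = {
  theme: 'dark',
  includeDirectories: ['/u'],
  chatCompression: { contextPercentageThreshold: 0.5 },
};
const workspaceS: Settings = { theme: 'light', includeDirectories: ['/w'] };

// Same spread order as the getter above (this workspace sets no folderTrust).
const mergedS: Settings = {
  ...userS,
  ...workspaceS,
  ...systemS,
  includeDirectories: [
    ...(systemS.includeDirectories || []),
    ...(userS.includeDirectories || []),
    ...(workspaceS.includeDirectories || []),
  ],
  chatCompression: {
    ...(systemS.chatCompression || {}),
    ...(userS.chatCompression || {}),
    ...(workspaceS.chatCompression || {}),
  },
};
// mergedS.theme === 'light'        // workspace overrides user; system sets no theme
// mergedS.includeDirectories       // ['/s', '/u', '/w']
// mergedS.chatCompression          // { contextPercentageThreshold: 0.5 }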
@@ -295,15 +249,61 @@ export function setUpCloudShellEnvironment(envFilePath: string | null): void {
}
}
export function loadEnvironment(): void {
export function loadEnvironment(settings?: Settings): void {
const envFilePath = findEnvFile(process.cwd());
// Cloud Shell environment variable handling
if (process.env.CLOUD_SHELL === 'true') {
setUpCloudShellEnvironment(envFilePath);
}
// If no settings provided, try to load workspace settings for exclusions
let resolvedSettings = settings;
if (!resolvedSettings) {
const workspaceSettingsPath = getWorkspaceSettingsPath(process.cwd());
try {
if (fs.existsSync(workspaceSettingsPath)) {
const workspaceContent = fs.readFileSync(
workspaceSettingsPath,
'utf-8',
);
const parsedWorkspaceSettings = JSON.parse(
stripJsonComments(workspaceContent),
) as Settings;
resolvedSettings = resolveEnvVarsInObject(parsedWorkspaceSettings);
}
} catch (_e) {
// Ignore errors loading workspace settings
}
}
if (envFilePath) {
dotenv.config({ path: envFilePath, quiet: true });
// Manually parse and load environment variables to handle exclusions correctly.
// This avoids modifying environment variables that were already set from the shell.
try {
const envFileContent = fs.readFileSync(envFilePath, 'utf-8');
const parsedEnv = dotenv.parse(envFileContent);
const excludedVars =
resolvedSettings?.excludedProjectEnvVars || DEFAULT_EXCLUDED_ENV_VARS;
const isProjectEnvFile = !envFilePath.includes(GEMINI_DIR);
for (const key in parsedEnv) {
if (Object.hasOwn(parsedEnv, key)) {
// If it's a project .env file, skip loading excluded variables.
if (isProjectEnvFile && excludedVars.includes(key)) {
continue;
}
// Load variable only if it's not already set in the environment.
if (!Object.hasOwn(process.env, key)) {
process.env[key] = parsedEnv[key];
}
}
}
} catch (_e) {
// Errors are ignored to match the behavior of `dotenv.config({ quiet: true })`.
}
}
}
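
The manual parse above exists so that two rules hold: variables named in excludedProjectEnvVars (default DEBUG, DEBUG_MODE) are skipped when they come from a project-level .env file, and nothing already present in process.env is ever overwritten. A minimal standalone sketch of that filtering, applied to an in-memory .env string with made-up values:

import * as dotenv from 'dotenv';

// Hypothetical .env content; dotenv.parse accepts a string directly.
const parsed = dotenv.parse('DEBUG=true\nMY_API_KEY=from-file\n');
const excludedVars = ['DEBUG', 'DEBUG_MODE'];

for (const [key, value] of Object.entries(parsed)) {
  if (excludedVars.includes(key)) continue; // excluded for project .env files
  if (Object.hasOwn(process.env, key)) continue; // never override shell-set values
  process.env[key] = value;
}
// DEBUG is dropped; MY_API_KEY is set only if the shell did not already export it.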
@@ -312,12 +312,29 @@ export function loadEnvironment(): void {
* Project settings override user settings.
*/
export function loadSettings(workspaceDir: string): LoadedSettings {
loadEnvironment();
let systemSettings: Settings = {};
let userSettings: Settings = {};
let workspaceSettings: Settings = {};
const settingsErrors: SettingsError[] = [];
const systemSettingsPath = getSystemSettingsPath();
// Resolve paths to their canonical representation to handle symlinks
const resolvedWorkspaceDir = path.resolve(workspaceDir);
const resolvedHomeDir = path.resolve(homedir());
let realWorkspaceDir = resolvedWorkspaceDir;
try {
// fs.realpathSync gets the "true" path, resolving any symlinks
realWorkspaceDir = fs.realpathSync(resolvedWorkspaceDir);
} catch (_e) {
// This is okay. The path might not exist yet, and that's a valid state.
}
// We expect homedir to always exist and be resolvable.
const realHomeDir = fs.realpathSync(resolvedHomeDir);
const workspaceSettingsPath = getWorkspaceSettingsPath(workspaceDir);
// Load system settings
try {
if (fs.existsSync(systemSettingsPath)) {
@@ -356,37 +373,34 @@ export function loadSettings(workspaceDir: string): LoadedSettings {
});
}
const workspaceSettingsPath = path.join(
workspaceDir,
SETTINGS_DIRECTORY_NAME,
'settings.json',
);
// Load workspace settings
try {
if (fs.existsSync(workspaceSettingsPath)) {
const projectContent = fs.readFileSync(workspaceSettingsPath, 'utf-8');
const parsedWorkspaceSettings = JSON.parse(
stripJsonComments(projectContent),
) as Settings;
workspaceSettings = resolveEnvVarsInObject(parsedWorkspaceSettings);
if (workspaceSettings.theme && workspaceSettings.theme === 'VS') {
workspaceSettings.theme = DefaultLight.name;
} else if (
workspaceSettings.theme &&
workspaceSettings.theme === 'VS2015'
) {
workspaceSettings.theme = DefaultDark.name;
if (realWorkspaceDir !== realHomeDir) {
// Load workspace settings
try {
if (fs.existsSync(workspaceSettingsPath)) {
const projectContent = fs.readFileSync(workspaceSettingsPath, 'utf-8');
const parsedWorkspaceSettings = JSON.parse(
stripJsonComments(projectContent),
) as Settings;
workspaceSettings = resolveEnvVarsInObject(parsedWorkspaceSettings);
if (workspaceSettings.theme && workspaceSettings.theme === 'VS') {
workspaceSettings.theme = DefaultLight.name;
} else if (
workspaceSettings.theme &&
workspaceSettings.theme === 'VS2015'
) {
workspaceSettings.theme = DefaultDark.name;
}
}
} catch (error: unknown) {
settingsErrors.push({
message: getErrorMessage(error),
path: workspaceSettingsPath,
});
}
} catch (error: unknown) {
settingsErrors.push({
message: getErrorMessage(error),
path: workspaceSettingsPath,
});
}
return new LoadedSettings(
// Create LoadedSettings first
const loadedSettings = new LoadedSettings(
{
path: systemSettingsPath,
settings: systemSettings,
@@ -401,6 +415,24 @@ export function loadSettings(workspaceDir: string): LoadedSettings {
},
settingsErrors,
);
// Validate chatCompression settings
const chatCompression = loadedSettings.merged.chatCompression;
const threshold = chatCompression?.contextPercentageThreshold;
if (
threshold != null &&
(typeof threshold !== 'number' || threshold < 0 || threshold > 1)
) {
console.warn(
`Invalid value for chatCompression.contextPercentageThreshold: "${threshold}". Please use a value between 0 and 1. Using default compression settings.`,
);
delete loadedSettings.merged.chatCompression;
}
// Load environment with merged settings
loadEnvironment(loadedSettings.merged);
return loadedSettings;
}
export function saveSettings(settingsFile: SettingsFile): void {

View File

@@ -0,0 +1,253 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import { describe, it, expect } from 'vitest';
import { SETTINGS_SCHEMA, Settings } from './settingsSchema.js';
describe('SettingsSchema', () => {
describe('SETTINGS_SCHEMA', () => {
it('should contain all expected top-level settings', () => {
const expectedSettings = [
'theme',
'customThemes',
'showMemoryUsage',
'usageStatisticsEnabled',
'autoConfigureMaxOldSpaceSize',
'preferredEditor',
'maxSessionTurns',
'memoryImportFormat',
'memoryDiscoveryMaxDirs',
'contextFileName',
'vimMode',
'ideMode',
'accessibility',
'checkpointing',
'fileFiltering',
'disableAutoUpdate',
'hideWindowTitle',
'hideTips',
'hideBanner',
'selectedAuthType',
'useExternalAuth',
'sandbox',
'coreTools',
'excludeTools',
'toolDiscoveryCommand',
'toolCallCommand',
'mcpServerCommand',
'mcpServers',
'allowMCPServers',
'excludeMCPServers',
'telemetry',
'bugCommand',
'summarizeToolOutput',
'ideModeFeature',
'dnsResolutionOrder',
'excludedProjectEnvVars',
'disableUpdateNag',
'includeDirectories',
'loadMemoryFromIncludeDirectories',
'model',
'hasSeenIdeIntegrationNudge',
'folderTrustFeature',
];
expectedSettings.forEach((setting) => {
expect(
SETTINGS_SCHEMA[setting as keyof typeof SETTINGS_SCHEMA],
).toBeDefined();
});
});
it('should have correct structure for each setting', () => {
Object.entries(SETTINGS_SCHEMA).forEach(([_key, definition]) => {
expect(definition).toHaveProperty('type');
expect(definition).toHaveProperty('label');
expect(definition).toHaveProperty('category');
expect(definition).toHaveProperty('requiresRestart');
expect(definition).toHaveProperty('default');
expect(typeof definition.type).toBe('string');
expect(typeof definition.label).toBe('string');
expect(typeof definition.category).toBe('string');
expect(typeof definition.requiresRestart).toBe('boolean');
});
});
it('should have correct nested setting structure', () => {
const nestedSettings = [
'accessibility',
'checkpointing',
'fileFiltering',
];
nestedSettings.forEach((setting) => {
const definition = SETTINGS_SCHEMA[
setting as keyof typeof SETTINGS_SCHEMA
] as (typeof SETTINGS_SCHEMA)[keyof typeof SETTINGS_SCHEMA] & {
properties: unknown;
};
expect(definition.type).toBe('object');
expect(definition.properties).toBeDefined();
expect(typeof definition.properties).toBe('object');
});
});
it('should have accessibility nested properties', () => {
expect(
SETTINGS_SCHEMA.accessibility.properties?.disableLoadingPhrases,
).toBeDefined();
expect(
SETTINGS_SCHEMA.accessibility.properties?.disableLoadingPhrases.type,
).toBe('boolean');
});
it('should have checkpointing nested properties', () => {
expect(SETTINGS_SCHEMA.checkpointing.properties?.enabled).toBeDefined();
expect(SETTINGS_SCHEMA.checkpointing.properties?.enabled.type).toBe(
'boolean',
);
});
it('should have fileFiltering nested properties', () => {
expect(
SETTINGS_SCHEMA.fileFiltering.properties?.respectGitIgnore,
).toBeDefined();
expect(
SETTINGS_SCHEMA.fileFiltering.properties?.respectGeminiIgnore,
).toBeDefined();
expect(
SETTINGS_SCHEMA.fileFiltering.properties?.enableRecursiveFileSearch,
).toBeDefined();
});
it('should have unique categories', () => {
const categories = new Set();
// Collect categories from top-level settings
Object.values(SETTINGS_SCHEMA).forEach((definition) => {
categories.add(definition.category);
// Also collect from nested properties
const defWithProps = definition as typeof definition & {
properties?: Record<string, unknown>;
};
if (defWithProps.properties) {
Object.values(defWithProps.properties).forEach(
(nestedDef: unknown) => {
const nestedDefTyped = nestedDef as { category?: string };
if (nestedDefTyped.category) {
categories.add(nestedDefTyped.category);
}
},
);
}
});
expect(categories.size).toBeGreaterThan(0);
expect(categories).toContain('General');
expect(categories).toContain('UI');
expect(categories).toContain('Mode');
expect(categories).toContain('Updates');
expect(categories).toContain('Accessibility');
expect(categories).toContain('Checkpointing');
expect(categories).toContain('File Filtering');
expect(categories).toContain('Advanced');
});
it('should have consistent default values for boolean settings', () => {
const checkBooleanDefaults = (schema: Record<string, unknown>) => {
Object.entries(schema).forEach(
([_key, definition]: [string, unknown]) => {
const def = definition as {
type?: string;
default?: unknown;
properties?: Record<string, unknown>;
};
if (def.type === 'boolean') {
// Boolean settings can have boolean or undefined defaults (for optional settings)
expect(['boolean', 'undefined']).toContain(typeof def.default);
}
if (def.properties) {
checkBooleanDefaults(def.properties);
}
},
);
};
checkBooleanDefaults(SETTINGS_SCHEMA as Record<string, unknown>);
});
it('should have showInDialog property configured', () => {
// Check that user-facing settings are marked for dialog display
expect(SETTINGS_SCHEMA.showMemoryUsage.showInDialog).toBe(true);
expect(SETTINGS_SCHEMA.vimMode.showInDialog).toBe(true);
expect(SETTINGS_SCHEMA.ideMode.showInDialog).toBe(true);
expect(SETTINGS_SCHEMA.disableAutoUpdate.showInDialog).toBe(true);
expect(SETTINGS_SCHEMA.hideWindowTitle.showInDialog).toBe(true);
expect(SETTINGS_SCHEMA.hideTips.showInDialog).toBe(true);
expect(SETTINGS_SCHEMA.hideBanner.showInDialog).toBe(true);
expect(SETTINGS_SCHEMA.usageStatisticsEnabled.showInDialog).toBe(true);
// Check that advanced settings are hidden from dialog
expect(SETTINGS_SCHEMA.selectedAuthType.showInDialog).toBe(false);
expect(SETTINGS_SCHEMA.coreTools.showInDialog).toBe(false);
expect(SETTINGS_SCHEMA.mcpServers.showInDialog).toBe(false);
expect(SETTINGS_SCHEMA.telemetry.showInDialog).toBe(false);
// Check that some settings are appropriately hidden
expect(SETTINGS_SCHEMA.theme.showInDialog).toBe(false); // Changed to false
expect(SETTINGS_SCHEMA.customThemes.showInDialog).toBe(false); // Managed via theme editor
expect(SETTINGS_SCHEMA.checkpointing.showInDialog).toBe(false); // Experimental feature
expect(SETTINGS_SCHEMA.accessibility.showInDialog).toBe(false); // Changed to false
expect(SETTINGS_SCHEMA.fileFiltering.showInDialog).toBe(false); // Changed to false
expect(SETTINGS_SCHEMA.preferredEditor.showInDialog).toBe(false); // Changed to false
expect(SETTINGS_SCHEMA.autoConfigureMaxOldSpaceSize.showInDialog).toBe(
true,
);
});
it('should infer Settings type correctly', () => {
// This test ensures that the Settings type is properly inferred from the schema
const settings: Settings = {
theme: 'dark',
includeDirectories: ['/path/to/dir'],
loadMemoryFromIncludeDirectories: true,
};
// TypeScript should not complain about these properties
expect(settings.theme).toBe('dark');
expect(settings.includeDirectories).toEqual(['/path/to/dir']);
expect(settings.loadMemoryFromIncludeDirectories).toBe(true);
});
it('should have includeDirectories setting in schema', () => {
expect(SETTINGS_SCHEMA.includeDirectories).toBeDefined();
expect(SETTINGS_SCHEMA.includeDirectories.type).toBe('array');
expect(SETTINGS_SCHEMA.includeDirectories.category).toBe('General');
expect(SETTINGS_SCHEMA.includeDirectories.default).toEqual([]);
});
it('should have loadMemoryFromIncludeDirectories setting in schema', () => {
expect(SETTINGS_SCHEMA.loadMemoryFromIncludeDirectories).toBeDefined();
expect(SETTINGS_SCHEMA.loadMemoryFromIncludeDirectories.type).toBe(
'boolean',
);
expect(SETTINGS_SCHEMA.loadMemoryFromIncludeDirectories.category).toBe(
'General',
);
expect(SETTINGS_SCHEMA.loadMemoryFromIncludeDirectories.default).toBe(
false,
);
});
it('should have folderTrustFeature setting in schema', () => {
expect(SETTINGS_SCHEMA.folderTrustFeature).toBeDefined();
expect(SETTINGS_SCHEMA.folderTrustFeature.type).toBe('boolean');
expect(SETTINGS_SCHEMA.folderTrustFeature.category).toBe('General');
expect(SETTINGS_SCHEMA.folderTrustFeature.default).toBe(false);
expect(SETTINGS_SCHEMA.folderTrustFeature.showInDialog).toBe(true);
});
});
});

View File

@@ -0,0 +1,571 @@
/**
* @license
* Copyright 2025 Google LLC
* SPDX-License-Identifier: Apache-2.0
*/
import {
MCPServerConfig,
BugCommandSettings,
TelemetrySettings,
AuthType,
ChatCompressionSettings,
} from '@qwen-code/qwen-code-core';
import { CustomTheme } from '../ui/themes/theme.js';
export interface SettingDefinition {
type: 'boolean' | 'string' | 'number' | 'array' | 'object';
label: string;
category: string;
requiresRestart: boolean;
default: boolean | string | number | string[] | object | undefined;
description?: string;
parentKey?: string;
childKey?: string;
key?: string;
properties?: SettingsSchema;
showInDialog?: boolean;
}
export interface SettingsSchema {
[key: string]: SettingDefinition;
}
export type MemoryImportFormat = 'tree' | 'flat';
export type DnsResolutionOrder = 'ipv4first' | 'verbatim';
/**
* The canonical schema for all settings.
* The structure of this object defines the structure of the `Settings` type.
* `as const` is crucial for TypeScript to infer the most specific types possible.
*/
export const SETTINGS_SCHEMA = {
// UI Settings
theme: {
type: 'string',
label: 'Theme',
category: 'UI',
requiresRestart: false,
default: undefined as string | undefined,
description: 'The color theme for the UI.',
showInDialog: false,
},
customThemes: {
type: 'object',
label: 'Custom Themes',
category: 'UI',
requiresRestart: false,
default: {} as Record<string, CustomTheme>,
description: 'Custom theme definitions.',
showInDialog: false,
},
hideWindowTitle: {
type: 'boolean',
label: 'Hide Window Title',
category: 'UI',
requiresRestart: true,
default: false,
description: 'Hide the window title bar',
showInDialog: true,
},
hideTips: {
type: 'boolean',
label: 'Hide Tips',
category: 'UI',
requiresRestart: false,
default: false,
description: 'Hide helpful tips in the UI',
showInDialog: true,
},
hideBanner: {
type: 'boolean',
label: 'Hide Banner',
category: 'UI',
requiresRestart: false,
default: false,
description: 'Hide the application banner',
showInDialog: true,
},
showMemoryUsage: {
type: 'boolean',
label: 'Show Memory Usage',
category: 'UI',
requiresRestart: false,
default: false,
description: 'Display memory usage information in the UI',
showInDialog: true,
},
usageStatisticsEnabled: {
type: 'boolean',
label: 'Enable Usage Statistics',
category: 'General',
requiresRestart: true,
default: true,
description: 'Enable collection of usage statistics',
showInDialog: true,
},
autoConfigureMaxOldSpaceSize: {
type: 'boolean',
label: 'Auto Configure Max Old Space Size',
category: 'General',
requiresRestart: true,
default: false,
description: 'Automatically configure Node.js memory limits',
showInDialog: true,
},
preferredEditor: {
type: 'string',
label: 'Preferred Editor',
category: 'General',
requiresRestart: false,
default: undefined as string | undefined,
description: 'The preferred editor to open files in.',
showInDialog: false,
},
maxSessionTurns: {
type: 'number',
label: 'Max Session Turns',
category: 'General',
requiresRestart: false,
default: undefined as number | undefined,
description:
'Maximum number of user/model/tool turns to keep in a session.',
showInDialog: false,
},
memoryImportFormat: {
type: 'string',
label: 'Memory Import Format',
category: 'General',
requiresRestart: false,
default: undefined as MemoryImportFormat | undefined,
description: 'The format to use when importing memory.',
showInDialog: false,
},
memoryDiscoveryMaxDirs: {
type: 'number',
label: 'Memory Discovery Max Dirs',
category: 'General',
requiresRestart: false,
default: undefined as number | undefined,
description: 'Maximum number of directories to search for memory.',
showInDialog: false,
},
contextFileName: {
type: 'object',
label: 'Context File Name',
category: 'General',
requiresRestart: false,
default: undefined as string | string[] | undefined,
description: 'The name of the context file.',
showInDialog: false,
},
vimMode: {
type: 'boolean',
label: 'Vim Mode',
category: 'Mode',
requiresRestart: false,
default: false,
description: 'Enable Vim keybindings',
showInDialog: true,
},
ideMode: {
type: 'boolean',
label: 'IDE Mode',
category: 'Mode',
requiresRestart: true,
default: false,
description: 'Enable IDE integration mode',
showInDialog: true,
},
accessibility: {
type: 'object',
label: 'Accessibility',
category: 'Accessibility',
requiresRestart: true,
default: {},
description: 'Accessibility settings.',
showInDialog: false,
properties: {
disableLoadingPhrases: {
type: 'boolean',
label: 'Disable Loading Phrases',
category: 'Accessibility',
requiresRestart: true,
default: false,
description: 'Disable loading phrases for accessibility',
showInDialog: true,
},
},
},
checkpointing: {
type: 'object',
label: 'Checkpointing',
category: 'Checkpointing',
requiresRestart: true,
default: {},
description: 'Session checkpointing settings.',
showInDialog: false,
properties: {
enabled: {
type: 'boolean',
label: 'Enable Checkpointing',
category: 'Checkpointing',
requiresRestart: true,
default: false,
description: 'Enable session checkpointing for recovery',
showInDialog: false,
},
},
},
fileFiltering: {
type: 'object',
label: 'File Filtering',
category: 'File Filtering',
requiresRestart: true,
default: {},
description: 'Settings for git-aware file filtering.',
showInDialog: false,
properties: {
respectGitIgnore: {
type: 'boolean',
label: 'Respect .gitignore',
category: 'File Filtering',
requiresRestart: true,
default: true,
description: 'Respect .gitignore files when searching',
showInDialog: true,
},
respectGeminiIgnore: {
type: 'boolean',
label: 'Respect .geminiignore',
category: 'File Filtering',
requiresRestart: true,
default: true,
description: 'Respect .geminiignore files when searching',
showInDialog: true,
},
enableRecursiveFileSearch: {
type: 'boolean',
label: 'Enable Recursive File Search',
category: 'File Filtering',
requiresRestart: true,
default: true,
description: 'Enable recursive file search functionality',
showInDialog: true,
},
},
},
disableAutoUpdate: {
type: 'boolean',
label: 'Disable Auto Update',
category: 'Updates',
requiresRestart: false,
default: false,
description: 'Disable automatic updates',
showInDialog: true,
},
selectedAuthType: {
type: 'string',
label: 'Selected Auth Type',
category: 'Advanced',
requiresRestart: true,
default: undefined as AuthType | undefined,
description: 'The currently selected authentication type.',
showInDialog: false,
},
useExternalAuth: {
type: 'boolean',
label: 'Use External Auth',
category: 'Advanced',
requiresRestart: true,
default: undefined as boolean | undefined,
description: 'Whether to use an external authentication flow.',
showInDialog: false,
},
sandbox: {
type: 'object',
label: 'Sandbox',
category: 'Advanced',
requiresRestart: true,
default: undefined as boolean | string | undefined,
description:
'Sandbox execution environment (can be a boolean or a path string).',
showInDialog: false,
},
coreTools: {
type: 'array',
label: 'Core Tools',
category: 'Advanced',
requiresRestart: true,
default: undefined as string[] | undefined,
description: 'Paths to core tool definitions.',
showInDialog: false,
},
excludeTools: {
type: 'array',
label: 'Exclude Tools',
category: 'Advanced',
requiresRestart: true,
default: undefined as string[] | undefined,
description: 'Tool names to exclude from discovery.',
showInDialog: false,
},
toolDiscoveryCommand: {
type: 'string',
label: 'Tool Discovery Command',
category: 'Advanced',
requiresRestart: true,
default: undefined as string | undefined,
description: 'Command to run for tool discovery.',
showInDialog: false,
},
toolCallCommand: {
type: 'string',
label: 'Tool Call Command',
category: 'Advanced',
requiresRestart: true,
default: undefined as string | undefined,
description: 'Command to run for tool calls.',
showInDialog: false,
},
mcpServerCommand: {
type: 'string',
label: 'MCP Server Command',
category: 'Advanced',
requiresRestart: true,
default: undefined as string | undefined,
description: 'Command to start an MCP server.',
showInDialog: false,
},
mcpServers: {
type: 'object',
label: 'MCP Servers',
category: 'Advanced',
requiresRestart: true,
default: {} as Record<string, MCPServerConfig>,
description: 'Configuration for MCP servers.',
showInDialog: false,
},
allowMCPServers: {
type: 'array',
label: 'Allow MCP Servers',
category: 'Advanced',
requiresRestart: true,
default: undefined as string[] | undefined,
description: 'A whitelist of MCP servers to allow.',
showInDialog: false,
},
excludeMCPServers: {
type: 'array',
label: 'Exclude MCP Servers',
category: 'Advanced',
requiresRestart: true,
default: undefined as string[] | undefined,
description: 'A blacklist of MCP servers to exclude.',
showInDialog: false,
},
telemetry: {
type: 'object',
label: 'Telemetry',
category: 'Advanced',
requiresRestart: true,
default: undefined as TelemetrySettings | undefined,
description: 'Telemetry configuration.',
showInDialog: false,
},
bugCommand: {
type: 'object',
label: 'Bug Command',
category: 'Advanced',
requiresRestart: false,
default: undefined as BugCommandSettings | undefined,
description: 'Configuration for the bug report command.',
showInDialog: false,
},
summarizeToolOutput: {
type: 'object',
label: 'Summarize Tool Output',
category: 'Advanced',
requiresRestart: false,
default: undefined as Record<string, { tokenBudget?: number }> | undefined,
description: 'Settings for summarizing tool output.',
showInDialog: false,
},
ideModeFeature: {
type: 'boolean',
label: 'IDE Mode Feature Flag',
category: 'Advanced',
requiresRestart: true,
default: undefined as boolean | undefined,
description: 'Internal feature flag for IDE mode.',
showInDialog: false,
},
dnsResolutionOrder: {
type: 'string',
label: 'DNS Resolution Order',
category: 'Advanced',
requiresRestart: true,
default: undefined as DnsResolutionOrder | undefined,
description: 'The DNS resolution order.',
showInDialog: false,
},
excludedProjectEnvVars: {
type: 'array',
label: 'Excluded Project Environment Variables',
category: 'Advanced',
requiresRestart: false,
default: ['DEBUG', 'DEBUG_MODE'] as string[],
description: 'Environment variables to exclude from project context.',
showInDialog: false,
},
disableUpdateNag: {
type: 'boolean',
label: 'Disable Update Nag',
category: 'Updates',
requiresRestart: false,
default: false,
description: 'Disable update notification prompts.',
showInDialog: false,
},
includeDirectories: {
type: 'array',
label: 'Include Directories',
category: 'General',
requiresRestart: false,
default: [] as string[],
description: 'Additional directories to include in the workspace context.',
showInDialog: false,
},
loadMemoryFromIncludeDirectories: {
type: 'boolean',
label: 'Load Memory From Include Directories',
category: 'General',
requiresRestart: false,
default: false,
description: 'Whether to load memory files from include directories.',
showInDialog: true,
},
model: {
type: 'string',
label: 'Model',
category: 'General',
requiresRestart: false,
default: undefined as string | undefined,
description: 'The Gemini model to use for conversations.',
showInDialog: false,
},
hasSeenIdeIntegrationNudge: {
type: 'boolean',
label: 'Has Seen IDE Integration Nudge',
category: 'General',
requiresRestart: false,
default: false,
description: 'Whether the user has seen the IDE integration nudge.',
showInDialog: false,
},
folderTrustFeature: {
type: 'boolean',
label: 'Folder Trust Feature',
category: 'General',
requiresRestart: false,
default: false,
description: 'Enable folder trust feature for enhanced security.',
showInDialog: true,
},
folderTrust: {
type: 'boolean',
label: 'Folder Trust',
category: 'General',
requiresRestart: false,
default: false,
description: 'Setting to track whether Folder trust is enabled.',
showInDialog: true,
},
chatCompression: {
type: 'object',
label: 'Chat Compression',
category: 'General',
requiresRestart: false,
default: undefined as ChatCompressionSettings | undefined,
description: 'Chat compression settings.',
showInDialog: false,
},
showLineNumbers: {
type: 'boolean',
label: 'Show Line Numbers',
category: 'General',
requiresRestart: false,
default: false,
description: 'Show line numbers in the chat.',
showInDialog: true,
},
contentGenerator: {
type: 'object',
label: 'Content Generator',
category: 'General',
requiresRestart: false,
default: undefined as Record<string, unknown> | undefined,
description: 'Content generator settings.',
showInDialog: false,
},
sampling_params: {
type: 'object',
label: 'Sampling Params',
category: 'General',
requiresRestart: false,
default: undefined as Record<string, unknown> | undefined,
description: 'Sampling parameters for the model.',
showInDialog: false,
},
enableOpenAILogging: {
type: 'boolean',
label: 'Enable OpenAI Logging',
category: 'General',
requiresRestart: false,
default: false,
description: 'Enable OpenAI logging.',
showInDialog: true,
},
sessionTokenLimit: {
type: 'number',
label: 'Session Token Limit',
category: 'General',
requiresRestart: false,
default: undefined as number | undefined,
description: 'The maximum number of tokens allowed in a session.',
showInDialog: false,
},
systemPromptMappings: {
type: 'object',
label: 'System Prompt Mappings',
category: 'General',
requiresRestart: false,
default: undefined as Record<string, string> | undefined,
description: 'Mappings of system prompts to model names.',
showInDialog: false,
},
tavilyApiKey: {
type: 'string',
label: 'Tavily API Key',
category: 'General',
requiresRestart: false,
default: undefined as string | undefined,
description: 'The API key for the Tavily API.',
showInDialog: false,
},
} as const;
type InferSettings<T extends SettingsSchema> = {
-readonly [K in keyof T]?: T[K] extends { properties: SettingsSchema }
? InferSettings<T[K]['properties']>
: T[K]['default'] extends boolean
? boolean
: T[K]['default'];
};
export type Settings = InferSettings<typeof SETTINGS_SCHEMA>;
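
Because every schema entry carries a typed default, the schema can also double as a source of default values. A small illustrative helper, not part of the file above, that walks the schema the same way InferSettings does (the cast is only there to satisfy the recursive index-signature type):

// Collect the declared defaults, recursing into nested `properties`.
function collectDefaults(schema: SettingsSchema): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, def] of Object.entries(schema)) {
    out[key] = def.properties ? collectDefaults(def.properties) : def.default;
  }
  return out;
}

const defaults = collectDefaults(
  SETTINGS_SCHEMA as unknown as SettingsSchema,
) as Settings;
// e.g. defaults.vimMode === false, defaults.includeDirectories deep-equals [],
// and defaults.fileFiltering includes respectGitIgnore: true.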

View File

@@ -6,7 +6,11 @@
import stripAnsi from 'strip-ansi';
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { main, setupUnhandledRejectionHandler } from './gemini.js';
import {
main,
setupUnhandledRejectionHandler,
validateDnsResolutionOrder,
} from './gemini.js';
import {
LoadedSettings,
SettingsFile,
@@ -211,3 +215,38 @@ describe('gemini.tsx main function', () => {
processExitSpy.mockRestore();
});
});
describe('validateDnsResolutionOrder', () => {
let consoleWarnSpy: ReturnType<typeof vi.spyOn>;
beforeEach(() => {
consoleWarnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
});
afterEach(() => {
consoleWarnSpy.mockRestore();
});
it('should return "ipv4first" when the input is "ipv4first"', () => {
expect(validateDnsResolutionOrder('ipv4first')).toBe('ipv4first');
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
it('should return "verbatim" when the input is "verbatim"', () => {
expect(validateDnsResolutionOrder('verbatim')).toBe('verbatim');
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
it('should return the default "ipv4first" when the input is undefined', () => {
expect(validateDnsResolutionOrder(undefined)).toBe('ipv4first');
expect(consoleWarnSpy).not.toHaveBeenCalled();
});
it('should return the default "ipv4first" and log a warning for an invalid string', () => {
expect(validateDnsResolutionOrder('invalid-value')).toBe('ipv4first');
expect(consoleWarnSpy).toHaveBeenCalledOnce();
expect(consoleWarnSpy).toHaveBeenCalledWith(
'Invalid value for dnsResolutionOrder in settings: "invalid-value". Using default "ipv4first".',
);
});
});

View File

@@ -7,14 +7,16 @@
import React from 'react';
import { render } from 'ink';
import { AppWrapper } from './ui/App.js';
import { loadCliConfig, parseArguments, CliArgs } from './config/config.js';
import { loadCliConfig, parseArguments } from './config/config.js';
import { readStdin } from './utils/readStdin.js';
import { basename } from 'node:path';
import v8 from 'node:v8';
import os from 'node:os';
import dns from 'node:dns';
import { spawn } from 'node:child_process';
import { start_sandbox } from './utils/sandbox.js';
import {
DnsResolutionOrder,
LoadedSettings,
loadSettings,
SettingScope,
@@ -23,24 +25,43 @@ import { themeManager } from './ui/themes/theme-manager.js';
import { getStartupWarnings } from './utils/startupWarnings.js';
import { getUserStartupWarnings } from './utils/userStartupWarnings.js';
import { runNonInteractive } from './nonInteractiveCli.js';
import { loadExtensions, Extension } from './config/extension.js';
import { loadExtensions } from './config/extension.js';
import { cleanupCheckpoints, registerCleanup } from './utils/cleanup.js';
import { getCliVersion } from './utils/version.js';
import {
ApprovalMode,
Config,
EditTool,
ShellTool,
WriteFileTool,
sessionId,
logUserPrompt,
AuthType,
getOauthClient,
logIdeConnection,
IdeConnectionEvent,
IdeConnectionType,
} from '@qwen-code/qwen-code-core';
import { validateAuthMethod } from './config/auth.js';
import { setMaxSizedBoxDebugging } from './ui/components/shared/MaxSizedBox.js';
import { validateNonInteractiveAuth } from './validateNonInterActiveAuth.js';
import { checkForUpdates } from './ui/utils/updateCheck.js';
import { handleAutoUpdate } from './utils/handleAutoUpdate.js';
import { appEvents, AppEvent } from './utils/events.js';
import { SettingsContext } from './ui/contexts/SettingsContext.js';
export function validateDnsResolutionOrder(
order: string | undefined,
): DnsResolutionOrder {
const defaultValue: DnsResolutionOrder = 'ipv4first';
if (order === undefined) {
return defaultValue;
}
if (order === 'ipv4first' || order === 'verbatim') {
return order;
}
// We don't want to throw here, just warn and use the default.
console.warn(
`Invalid value for dnsResolutionOrder in settings: "${order}". Using default "${defaultValue}".`,
);
return defaultValue;
}
function getNodeMemoryArgs(config: Config): string[] {
const totalMemoryMB = os.totalmem() / (1024 * 1024);
@@ -136,6 +157,10 @@ export async function main() {
argv,
);
dns.setDefaultResultOrder(
validateDnsResolutionOrder(settings.merged.dnsResolutionOrder),
);
if (argv.promptInteractive && !process.stdin.isTTY) {
console.error(
'Error: The --prompt-interactive flag is not supported when piping input from stdin.',
@@ -166,6 +191,11 @@ export async function main() {
await config.initialize();
if (config.getIdeMode() && config.getIdeModeFeature()) {
await config.getIdeClient().connect();
logIdeConnection(config, new IdeConnectionEvent(IdeConnectionType.START));
}
// Load custom themes from settings
themeManager.loadCustomThemes(settings.merged.customThemes);
@@ -184,7 +214,10 @@ export async function main() {
: [];
const sandboxConfig = config.getSandbox();
if (sandboxConfig) {
if (settings.merged.selectedAuthType) {
if (
settings.merged.selectedAuthType &&
!settings.merged.useExternalAuth
) {
// Validate authentication here because the sandbox will interfere with the Oauth2 web redirect.
try {
const err = validateAuthMethod(settings.merged.selectedAuthType);
@@ -197,7 +230,7 @@ export async function main() {
process.exit(1);
}
}
await start_sandbox(sandboxConfig, memoryArgs);
await start_sandbox(sandboxConfig, memoryArgs, config);
process.exit(0);
} else {
// Not in a sandbox and not entering one, so relaunch with additional
@@ -227,25 +260,35 @@ export async function main() {
...(await getUserStartupWarnings(workspaceRoot)),
];
const shouldBeInteractive =
!!argv.promptInteractive || (process.stdin.isTTY && input?.length === 0);
// Render UI, passing necessary config values. Check that there is no command line question.
if (shouldBeInteractive) {
if (config.isInteractive()) {
const version = await getCliVersion();
setWindowTitle(basename(workspaceRoot), settings);
const instance = render(
<React.StrictMode>
<AppWrapper
config={config}
settings={settings}
startupWarnings={startupWarnings}
version={version}
/>
<SettingsContext.Provider value={settings}>
<AppWrapper
config={config}
settings={settings}
startupWarnings={startupWarnings}
version={version}
/>
</SettingsContext.Provider>
</React.StrictMode>,
{ exitOnCtrlC: false },
);
checkForUpdates()
.then((info) => {
handleAutoUpdate(info, settings, config.getProjectRoot());
})
.catch((err) => {
// Silently ignore update check errors.
if (config.getDebugMode()) {
console.error('Update check failed:', err);
}
});
registerCleanup(() => instance.unmount());
return;
}
@@ -269,12 +312,10 @@ export async function main() {
prompt_length: input.length,
});
// Non-interactive mode handled by runNonInteractive
const nonInteractiveConfig = await loadNonInteractiveConfig(
const nonInteractiveConfig = await validateNonInteractiveAuth(
settings.merged.selectedAuthType,
settings.merged.useExternalAuth,
config,
extensions,
settings,
argv,
);
await runNonInteractive(nonInteractiveConfig, input, prompt_id);
@@ -295,42 +336,3 @@ function setWindowTitle(title: string, settings: LoadedSettings) {
});
}
}
async function loadNonInteractiveConfig(
config: Config,
extensions: Extension[],
settings: LoadedSettings,
argv: CliArgs,
) {
let finalConfig = config;
if (config.getApprovalMode() !== ApprovalMode.YOLO) {
// Everything is not allowed, ensure that only read-only tools are configured.
const existingExcludeTools = settings.merged.excludeTools || [];
const interactiveTools = [
ShellTool.Name,
EditTool.Name,
WriteFileTool.Name,
];
const newExcludeTools = [
...new Set([...existingExcludeTools, ...interactiveTools]),
];
const nonInteractiveSettings = {
...settings.merged,
excludeTools: newExcludeTools,
};
finalConfig = await loadCliConfig(
nonInteractiveSettings,
extensions,
config.getSessionId(),
argv,
);
await finalConfig.initialize();
}
return await validateNonInteractiveAuth(
settings.merged.selectedAuthType,
finalConfig,
);
}

View File

@@ -4,196 +4,170 @@
* SPDX-License-Identifier: Apache-2.0
*/
/* eslint-disable @typescript-eslint/no-explicit-any */
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import {
Config,
executeToolCall,
ToolRegistry,
ToolErrorType,
shutdownTelemetry,
GeminiEventType,
ServerGeminiStreamEvent,
} from '@qwen-code/qwen-code-core';
import { Part } from '@google/genai';
import { runNonInteractive } from './nonInteractiveCli.js';
import { Config, GeminiClient, ToolRegistry } from '@qwen-code/qwen-code-core';
import { GenerateContentResponse, Part, FunctionCall } from '@google/genai';
import { vi } from 'vitest';
// Mock dependencies
vi.mock('@qwen-code/qwen-code-core', async () => {
const actualCore = await vi.importActual<
typeof import('@qwen-code/qwen-code-core')
>('@qwen-code/qwen-code-core');
// Mock core modules
vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
const original =
await importOriginal<typeof import('@qwen-code/qwen-code-core')>();
return {
...actualCore,
GeminiClient: vi.fn(),
ToolRegistry: vi.fn(),
...original,
executeToolCall: vi.fn(),
shutdownTelemetry: vi.fn(),
isTelemetrySdkInitialized: vi.fn().mockReturnValue(true),
};
});
describe('runNonInteractive', () => {
let mockConfig: Config;
let mockGeminiClient: GeminiClient;
let mockToolRegistry: ToolRegistry;
let mockChat: {
sendMessageStream: ReturnType<typeof vi.fn>;
let mockCoreExecuteToolCall: vi.Mock;
let mockShutdownTelemetry: vi.Mock;
let consoleErrorSpy: vi.SpyInstance;
let processExitSpy: vi.SpyInstance;
let processStdoutSpy: vi.SpyInstance;
let mockGeminiClient: {
sendMessageStream: vi.Mock;
};
let mockProcessStdoutWrite: ReturnType<typeof vi.fn>;
let mockProcessExit: ReturnType<typeof vi.fn>;
beforeEach(() => {
vi.resetAllMocks();
mockChat = {
sendMessageStream: vi.fn(),
};
mockGeminiClient = {
getChat: vi.fn().mockResolvedValue(mockChat),
} as unknown as GeminiClient;
mockCoreExecuteToolCall = vi.mocked(executeToolCall);
mockShutdownTelemetry = vi.mocked(shutdownTelemetry);
consoleErrorSpy = vi.spyOn(console, 'error').mockImplementation(() => {});
processExitSpy = vi
.spyOn(process, 'exit')
.mockImplementation((() => {}) as (code?: number) => never);
processStdoutSpy = vi
.spyOn(process.stdout, 'write')
.mockImplementation(() => true);
mockToolRegistry = {
getFunctionDeclarations: vi.fn().mockReturnValue([]),
getTool: vi.fn(),
getFunctionDeclarations: vi.fn().mockReturnValue([]),
} as unknown as ToolRegistry;
vi.mocked(GeminiClient).mockImplementation(() => mockGeminiClient);
vi.mocked(ToolRegistry).mockImplementation(() => mockToolRegistry);
mockGeminiClient = {
sendMessageStream: vi.fn(),
};
mockConfig = {
getToolRegistry: vi.fn().mockReturnValue(mockToolRegistry),
initialize: vi.fn().mockResolvedValue(undefined),
getGeminiClient: vi.fn().mockReturnValue(mockGeminiClient),
getContentGeneratorConfig: vi.fn().mockReturnValue({}),
getToolRegistry: vi.fn().mockResolvedValue(mockToolRegistry),
getMaxSessionTurns: vi.fn().mockReturnValue(10),
initialize: vi.fn(),
getIdeMode: vi.fn().mockReturnValue(false),
getFullContext: vi.fn().mockReturnValue(false),
getContentGeneratorConfig: vi.fn().mockReturnValue({}),
getDebugMode: vi.fn().mockReturnValue(false),
} as unknown as Config;
mockProcessStdoutWrite = vi.fn().mockImplementation(() => true);
process.stdout.write = mockProcessStdoutWrite as any; // Use any to bypass strict signature matching for mock
mockProcessExit = vi
.fn()
.mockImplementation((_code?: number) => undefined as never);
process.exit = mockProcessExit as any; // Use any for process.exit mock
});
afterEach(() => {
vi.restoreAllMocks();
// Restore original process methods if they were globally patched
// This might require storing the original methods before patching them in beforeEach
});
async function* createStreamFromEvents(
events: ServerGeminiStreamEvent[],
): AsyncGenerator<ServerGeminiStreamEvent> {
for (const event of events) {
yield event;
}
}
it('should process input and write text output', async () => {
const inputStream = (async function* () {
yield {
candidates: [{ content: { parts: [{ text: 'Hello' }] } }],
} as GenerateContentResponse;
yield {
candidates: [{ content: { parts: [{ text: ' World' }] } }],
} as GenerateContentResponse;
})();
mockChat.sendMessageStream.mockResolvedValue(inputStream);
const events: ServerGeminiStreamEvent[] = [
{ type: GeminiEventType.Content, value: 'Hello' },
{ type: GeminiEventType.Content, value: ' World' },
];
mockGeminiClient.sendMessageStream.mockReturnValue(
createStreamFromEvents(events),
);
await runNonInteractive(mockConfig, 'Test input', 'prompt-id-1');
expect(mockChat.sendMessageStream).toHaveBeenCalledWith(
{
message: [{ text: 'Test input' }],
config: {
abortSignal: expect.any(AbortSignal),
tools: [{ functionDeclarations: [] }],
},
},
expect.any(String),
expect(mockGeminiClient.sendMessageStream).toHaveBeenCalledWith(
[{ text: 'Test input' }],
expect.any(AbortSignal),
'prompt-id-1',
);
expect(mockProcessStdoutWrite).toHaveBeenCalledWith('Hello');
expect(mockProcessStdoutWrite).toHaveBeenCalledWith(' World');
expect(mockProcessStdoutWrite).toHaveBeenCalledWith('\n');
expect(processStdoutSpy).toHaveBeenCalledWith('Hello');
expect(processStdoutSpy).toHaveBeenCalledWith(' World');
expect(processStdoutSpy).toHaveBeenCalledWith('\n');
expect(mockShutdownTelemetry).toHaveBeenCalled();
});
it('should handle a single tool call and respond', async () => {
const functionCall: FunctionCall = {
id: 'fc1',
name: 'testTool',
args: { p: 'v' },
};
const toolResponsePart: Part = {
functionResponse: {
const toolCallEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.ToolCallRequest,
value: {
callId: 'tool-1',
name: 'testTool',
id: 'fc1',
response: { result: 'tool success' },
args: { arg1: 'value1' },
isClientInitiated: false,
prompt_id: 'prompt-id-2',
},
};
const toolResponse: Part[] = [{ text: 'Tool response' }];
mockCoreExecuteToolCall.mockResolvedValue({ responseParts: toolResponse });
const { executeToolCall: mockCoreExecuteToolCall } = await import(
'@qwen-code/qwen-code-core'
);
vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({
callId: 'fc1',
responseParts: [toolResponsePart],
resultDisplay: 'Tool success display',
error: undefined,
});
const firstCallEvents: ServerGeminiStreamEvent[] = [toolCallEvent];
const secondCallEvents: ServerGeminiStreamEvent[] = [
{ type: GeminiEventType.Content, value: 'Final answer' },
];
const stream1 = (async function* () {
yield { functionCalls: [functionCall] } as GenerateContentResponse;
})();
const stream2 = (async function* () {
yield {
candidates: [{ content: { parts: [{ text: 'Final answer' }] } }],
} as GenerateContentResponse;
})();
mockChat.sendMessageStream
.mockResolvedValueOnce(stream1)
.mockResolvedValueOnce(stream2);
mockGeminiClient.sendMessageStream
.mockReturnValueOnce(createStreamFromEvents(firstCallEvents))
.mockReturnValueOnce(createStreamFromEvents(secondCallEvents));
await runNonInteractive(mockConfig, 'Use a tool', 'prompt-id-2');
expect(mockChat.sendMessageStream).toHaveBeenCalledTimes(2);
expect(mockGeminiClient.sendMessageStream).toHaveBeenCalledTimes(2);
expect(mockCoreExecuteToolCall).toHaveBeenCalledWith(
mockConfig,
expect.objectContaining({ callId: 'fc1', name: 'testTool' }),
expect.objectContaining({ name: 'testTool' }),
mockToolRegistry,
expect.any(AbortSignal),
);
expect(mockChat.sendMessageStream).toHaveBeenLastCalledWith(
expect.objectContaining({
message: [toolResponsePart],
}),
expect.any(String),
expect(mockGeminiClient.sendMessageStream).toHaveBeenNthCalledWith(
2,
[{ text: 'Tool response' }],
expect.any(AbortSignal),
'prompt-id-2',
);
expect(mockProcessStdoutWrite).toHaveBeenCalledWith('Final answer');
expect(processStdoutSpy).toHaveBeenCalledWith('Final answer');
expect(processStdoutSpy).toHaveBeenCalledWith('\n');
});
it('should handle error during tool execution', async () => {
const functionCall: FunctionCall = {
id: 'fcError',
name: 'errorTool',
args: {},
};
const errorResponsePart: Part = {
functionResponse: {
const toolCallEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.ToolCallRequest,
value: {
callId: 'tool-1',
name: 'errorTool',
id: 'fcError',
response: { error: 'Tool failed' },
args: {},
isClientInitiated: false,
prompt_id: 'prompt-id-3',
},
};
const { executeToolCall: mockCoreExecuteToolCall } = await import(
'@qwen-code/qwen-code-core'
);
vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({
callId: 'fcError',
responseParts: [errorResponsePart],
resultDisplay: 'Tool execution failed badly',
error: new Error('Tool failed'),
mockCoreExecuteToolCall.mockResolvedValue({
error: new Error('Tool execution failed badly'),
errorType: ToolErrorType.UNHANDLED_EXCEPTION,
});
const stream1 = (async function* () {
yield { functionCalls: [functionCall] } as GenerateContentResponse;
})();
const stream2 = (async function* () {
yield {
candidates: [
{ content: { parts: [{ text: 'Could not complete request.' }] } },
],
} as GenerateContentResponse;
})();
mockChat.sendMessageStream
.mockResolvedValueOnce(stream1)
.mockResolvedValueOnce(stream2);
const consoleErrorSpy = vi
.spyOn(console, 'error')
.mockImplementation(() => {});
mockGeminiClient.sendMessageStream.mockReturnValue(
createStreamFromEvents([toolCallEvent]),
);
await runNonInteractive(mockConfig, 'Trigger tool error', 'prompt-id-3');
@@ -201,75 +175,48 @@ describe('runNonInteractive', () => {
expect(consoleErrorSpy).toHaveBeenCalledWith(
'Error executing tool errorTool: Tool execution failed badly',
);
expect(mockChat.sendMessageStream).toHaveBeenLastCalledWith(
expect.objectContaining({
message: [errorResponsePart],
}),
expect.any(String),
);
expect(mockProcessStdoutWrite).toHaveBeenCalledWith(
'Could not complete request.',
);
expect(processExitSpy).toHaveBeenCalledWith(1);
});
it('should exit with error if sendMessageStream throws initially', async () => {
const apiError = new Error('API connection failed');
mockChat.sendMessageStream.mockRejectedValue(apiError);
const consoleErrorSpy = vi
.spyOn(console, 'error')
.mockImplementation(() => {});
mockGeminiClient.sendMessageStream.mockImplementation(() => {
throw apiError;
});
await runNonInteractive(mockConfig, 'Initial fail', 'prompt-id-4');
expect(consoleErrorSpy).toHaveBeenCalledWith(
'[API Error: API connection failed]',
);
expect(processExitSpy).toHaveBeenCalledWith(1);
});
it('should not exit if a tool is not found, and should send error back to model', async () => {
const functionCall: FunctionCall = {
id: 'fcNotFound',
name: 'nonexistentTool',
args: {},
};
const errorResponsePart: Part = {
functionResponse: {
const toolCallEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.ToolCallRequest,
value: {
callId: 'tool-1',
name: 'nonexistentTool',
id: 'fcNotFound',
response: { error: 'Tool "nonexistentTool" not found in registry.' },
args: {},
isClientInitiated: false,
prompt_id: 'prompt-id-5',
},
};
const { executeToolCall: mockCoreExecuteToolCall } = await import(
'@qwen-code/qwen-code-core'
);
vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({
callId: 'fcNotFound',
responseParts: [errorResponsePart],
resultDisplay: 'Tool "nonexistentTool" not found in registry.',
mockCoreExecuteToolCall.mockResolvedValue({
error: new Error('Tool "nonexistentTool" not found in registry.'),
resultDisplay: 'Tool "nonexistentTool" not found in registry.',
});
const finalResponse: ServerGeminiStreamEvent[] = [
{
type: GeminiEventType.Content,
value: "Sorry, I can't find that tool.",
},
];
const stream1 = (async function* () {
yield { functionCalls: [functionCall] } as GenerateContentResponse;
})();
const stream2 = (async function* () {
yield {
candidates: [
{
content: {
parts: [{ text: 'Unfortunately the tool does not exist.' }],
},
},
],
} as GenerateContentResponse;
})();
mockChat.sendMessageStream
.mockResolvedValueOnce(stream1)
.mockResolvedValueOnce(stream2);
const consoleErrorSpy = vi
.spyOn(console, 'error')
.mockImplementation(() => {});
mockGeminiClient.sendMessageStream
.mockReturnValueOnce(createStreamFromEvents([toolCallEvent]))
.mockReturnValueOnce(createStreamFromEvents(finalResponse));
await runNonInteractive(
mockConfig,
@@ -277,68 +224,22 @@ describe('runNonInteractive', () => {
'prompt-id-5',
);
expect(mockCoreExecuteToolCall).toHaveBeenCalled();
expect(consoleErrorSpy).toHaveBeenCalledWith(
'Error executing tool nonexistentTool: Tool "nonexistentTool" not found in registry.',
);
expect(mockProcessExit).not.toHaveBeenCalled();
expect(mockChat.sendMessageStream).toHaveBeenCalledTimes(2);
expect(mockChat.sendMessageStream).toHaveBeenLastCalledWith(
expect.objectContaining({
message: [errorResponsePart],
}),
expect.any(String),
);
expect(mockProcessStdoutWrite).toHaveBeenCalledWith(
'Unfortunately the tool does not exist.',
expect(processExitSpy).not.toHaveBeenCalled();
expect(mockGeminiClient.sendMessageStream).toHaveBeenCalledTimes(2);
expect(processStdoutSpy).toHaveBeenCalledWith(
"Sorry, I can't find that tool.",
);
});
it('should exit when max session turns are exceeded', async () => {
const functionCall: FunctionCall = {
id: 'fcLoop',
name: 'loopTool',
args: {},
};
const toolResponsePart: Part = {
functionResponse: {
name: 'loopTool',
id: 'fcLoop',
response: { result: 'still looping' },
},
};
// Config with a max turn of 1
vi.mocked(mockConfig.getMaxSessionTurns).mockReturnValue(1);
const { executeToolCall: mockCoreExecuteToolCall } = await import(
'@qwen-code/qwen-code-core'
);
vi.mocked(mockCoreExecuteToolCall).mockResolvedValue({
callId: 'fcLoop',
responseParts: [toolResponsePart],
resultDisplay: 'Still looping',
error: undefined,
});
const stream = (async function* () {
yield { functionCalls: [functionCall] } as GenerateContentResponse;
})();
mockChat.sendMessageStream.mockResolvedValue(stream);
const consoleErrorSpy = vi
.spyOn(console, 'error')
.mockImplementation(() => {});
await runNonInteractive(mockConfig, 'Trigger loop');
expect(mockChat.sendMessageStream).toHaveBeenCalledTimes(1);
vi.mocked(mockConfig.getMaxSessionTurns).mockReturnValue(0);
await runNonInteractive(mockConfig, 'Trigger loop', 'prompt-id-6');
expect(consoleErrorSpy).toHaveBeenCalledWith(
`
Reached max session turns for this session. Increase the number of turns by specifying maxSessionTurns in settings.json.`,
'\n Reached max session turns for this session. Increase the number of turns by specifying maxSessionTurns in settings.json.',
);
expect(mockProcessExit).not.toHaveBeenCalled();
});
});

View File

@@ -11,64 +11,46 @@ import {
ToolRegistry,
shutdownTelemetry,
isTelemetrySdkInitialized,
GeminiEventType,
ToolErrorType,
} from '@qwen-code/qwen-code-core';
import {
Content,
Part,
FunctionCall,
GenerateContentResponse,
} from '@google/genai';
import { Content, Part, FunctionCall } from '@google/genai';
import { parseAndFormatApiError } from './ui/utils/errorParsing.js';
function getResponseText(response: GenerateContentResponse): string | null {
if (response.candidates && response.candidates.length > 0) {
const candidate = response.candidates[0];
if (
candidate.content &&
candidate.content.parts &&
candidate.content.parts.length > 0
) {
// We are running in headless mode so we don't need to return thoughts to STDOUT.
const thoughtPart = candidate.content.parts[0];
if (thoughtPart?.thought) {
return null;
}
return candidate.content.parts
.filter((part) => part.text)
.map((part) => part.text)
.join('');
}
}
return null;
}
import { ConsolePatcher } from './ui/utils/ConsolePatcher.js';
export async function runNonInteractive(
config: Config,
input: string,
prompt_id: string,
): Promise<void> {
await config.initialize();
// Handle EPIPE errors when the output is piped to a command that closes early.
process.stdout.on('error', (err: NodeJS.ErrnoException) => {
if (err.code === 'EPIPE') {
// Exit gracefully if the pipe is closed.
process.exit(0);
}
const consolePatcher = new ConsolePatcher({
stderr: true,
debugMode: config.getDebugMode(),
});
const geminiClient = config.getGeminiClient();
const toolRegistry: ToolRegistry = await config.getToolRegistry();
const chat = await geminiClient.getChat();
const abortController = new AbortController();
let currentMessages: Content[] = [{ role: 'user', parts: [{ text: input }] }];
let turnCount = 0;
try {
consolePatcher.patch();
// Handle EPIPE errors when the output is piped to a command that closes early.
process.stdout.on('error', (err: NodeJS.ErrnoException) => {
if (err.code === 'EPIPE') {
// Exit gracefully if the pipe is closed.
process.exit(0);
}
});
const geminiClient = config.getGeminiClient();
const toolRegistry: ToolRegistry = await config.getToolRegistry();
const abortController = new AbortController();
let currentMessages: Content[] = [
{ role: 'user', parts: [{ text: input }] },
];
let turnCount = 0;
while (true) {
turnCount++;
if (
config.getMaxSessionTurns() > 0 &&
config.getMaxSessionTurns() >= 0 &&
turnCount > config.getMaxSessionTurns()
) {
console.error(
@@ -78,30 +60,28 @@ export async function runNonInteractive(
}
const functionCalls: FunctionCall[] = [];
const responseStream = await chat.sendMessageStream(
{
message: currentMessages[0]?.parts || [], // Ensure parts are always provided
config: {
abortSignal: abortController.signal,
tools: [
{ functionDeclarations: toolRegistry.getFunctionDeclarations() },
],
},
},
const responseStream = geminiClient.sendMessageStream(
currentMessages[0]?.parts || [],
abortController.signal,
prompt_id,
);
for await (const resp of responseStream) {
for await (const event of responseStream) {
if (abortController.signal.aborted) {
console.error('Operation cancelled.');
return;
}
const textPart = getResponseText(resp);
if (textPart) {
process.stdout.write(textPart);
}
if (resp.functionCalls) {
functionCalls.push(...resp.functionCalls);
if (event.type === GeminiEventType.Content) {
process.stdout.write(event.value);
} else if (event.type === GeminiEventType.ToolCallRequest) {
const toolCallRequest = event.value;
const fc: FunctionCall = {
name: toolCallRequest.name,
args: toolCallRequest.args,
id: toolCallRequest.callId,
};
functionCalls.push(fc);
}
}
@@ -126,15 +106,11 @@ export async function runNonInteractive(
);
if (toolResponse.error) {
const isToolNotFound = toolResponse.error.message.includes(
'not found in registry',
);
console.error(
`Error executing tool ${fc.name}: ${toolResponse.resultDisplay || toolResponse.error.message}`,
);
if (!isToolNotFound) {
if (toolResponse.errorType === ToolErrorType.UNHANDLED_EXCEPTION)
process.exit(1);
}
}
if (toolResponse.responseParts) {
@@ -165,6 +141,7 @@ export async function runNonInteractive(
);
process.exit(1);
} finally {
consolePatcher.cleanup();
if (isTelemetrySdkInitialized()) {
await shutdownTelemetry();
}
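The rewritten runNonInteractive above consumes typed ServerGeminiStreamEvent values from GeminiClient.sendMessageStream rather than raw GenerateContentResponse chunks. A minimal, standalone sketch of that consumption pattern, assuming only the event shapes visible in the diff (this is not the full implementation):

import {
  GeminiEventType,
  type ServerGeminiStreamEvent,
} from '@qwen-code/qwen-code-core';
import type { FunctionCall } from '@google/genai';

// Drain one turn's event stream: print content as it arrives and
// collect tool-call requests to execute after the stream ends.
async function consumeTurn(
  stream: AsyncGenerator<ServerGeminiStreamEvent>,
): Promise<FunctionCall[]> {
  const functionCalls: FunctionCall[] = [];
  for await (const event of stream) {
    if (event.type === GeminiEventType.Content) {
      process.stdout.write(event.value);
    } else if (event.type === GeminiEventType.ToolCallRequest) {
      const { name, args, callId } = event.value;
      functionCalls.push({ name, args, id: callId });
    }
  }
  return functionCalls;
}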

View File

@@ -16,10 +16,12 @@ import { compressCommand } from '../ui/commands/compressCommand.js';
import { copyCommand } from '../ui/commands/copyCommand.js';
import { corgiCommand } from '../ui/commands/corgiCommand.js';
import { docsCommand } from '../ui/commands/docsCommand.js';
import { directoryCommand } from '../ui/commands/directoryCommand.js';
import { editorCommand } from '../ui/commands/editorCommand.js';
import { extensionsCommand } from '../ui/commands/extensionsCommand.js';
import { helpCommand } from '../ui/commands/helpCommand.js';
import { ideCommand } from '../ui/commands/ideCommand.js';
import { initCommand } from '../ui/commands/initCommand.js';
import { mcpCommand } from '../ui/commands/mcpCommand.js';
import { memoryCommand } from '../ui/commands/memoryCommand.js';
import { privacyCommand } from '../ui/commands/privacyCommand.js';
@@ -28,7 +30,9 @@ import { restoreCommand } from '../ui/commands/restoreCommand.js';
import { statsCommand } from '../ui/commands/statsCommand.js';
import { themeCommand } from '../ui/commands/themeCommand.js';
import { toolsCommand } from '../ui/commands/toolsCommand.js';
import { settingsCommand } from '../ui/commands/settingsCommand.js';
import { vimCommand } from '../ui/commands/vimCommand.js';
import { setupGithubCommand } from '../ui/commands/setupGithubCommand.js';
/**
* Loads the core, hard-coded slash commands that are an integral part
@@ -55,19 +59,23 @@ export class BuiltinCommandLoader implements ICommandLoader {
copyCommand,
corgiCommand,
docsCommand,
directoryCommand,
editorCommand,
extensionsCommand,
helpCommand,
ideCommand(this.config),
initCommand,
mcpCommand,
memoryCommand,
privacyCommand,
mcpCommand,
quitCommand,
restoreCommand(this.config),
statsCommand,
themeCommand,
toolsCommand,
settingsCommand,
vimCommand,
setupGithubCommand,
];
return allDefinitions.filter((cmd): cmd is SlashCommand => cmd !== null);

View File

@@ -177,4 +177,176 @@ describe('CommandService', () => {
expect(loader2.loadCommands).toHaveBeenCalledTimes(1);
expect(loader2.loadCommands).toHaveBeenCalledWith(signal);
});
it('should rename extension commands when they conflict', async () => {
const builtinCommand = createMockCommand('deploy', CommandKind.BUILT_IN);
const userCommand = createMockCommand('sync', CommandKind.FILE);
const extensionCommand1 = {
...createMockCommand('deploy', CommandKind.FILE),
extensionName: 'firebase',
description: '[firebase] Deploy to Firebase',
};
const extensionCommand2 = {
...createMockCommand('sync', CommandKind.FILE),
extensionName: 'git-helper',
description: '[git-helper] Sync with remote',
};
const mockLoader1 = new MockCommandLoader([builtinCommand]);
const mockLoader2 = new MockCommandLoader([
userCommand,
extensionCommand1,
extensionCommand2,
]);
const service = await CommandService.create(
[mockLoader1, mockLoader2],
new AbortController().signal,
);
const commands = service.getCommands();
expect(commands).toHaveLength(4);
// Built-in command keeps original name
const deployBuiltin = commands.find(
(cmd) => cmd.name === 'deploy' && !cmd.extensionName,
);
expect(deployBuiltin).toBeDefined();
expect(deployBuiltin?.kind).toBe(CommandKind.BUILT_IN);
// Extension command conflicting with built-in gets renamed
const deployExtension = commands.find(
(cmd) => cmd.name === 'firebase.deploy',
);
expect(deployExtension).toBeDefined();
expect(deployExtension?.extensionName).toBe('firebase');
// User command keeps original name
const syncUser = commands.find(
(cmd) => cmd.name === 'sync' && !cmd.extensionName,
);
expect(syncUser).toBeDefined();
expect(syncUser?.kind).toBe(CommandKind.FILE);
// Extension command conflicting with user command gets renamed
const syncExtension = commands.find(
(cmd) => cmd.name === 'git-helper.sync',
);
expect(syncExtension).toBeDefined();
expect(syncExtension?.extensionName).toBe('git-helper');
});
it('should handle user/project command override correctly', async () => {
const builtinCommand = createMockCommand('help', CommandKind.BUILT_IN);
const userCommand = createMockCommand('help', CommandKind.FILE);
const projectCommand = createMockCommand('deploy', CommandKind.FILE);
const userDeployCommand = createMockCommand('deploy', CommandKind.FILE);
const mockLoader1 = new MockCommandLoader([builtinCommand]);
const mockLoader2 = new MockCommandLoader([
userCommand,
userDeployCommand,
projectCommand,
]);
const service = await CommandService.create(
[mockLoader1, mockLoader2],
new AbortController().signal,
);
const commands = service.getCommands();
expect(commands).toHaveLength(2);
// User command overrides built-in
const helpCommand = commands.find((cmd) => cmd.name === 'help');
expect(helpCommand).toBeDefined();
expect(helpCommand?.kind).toBe(CommandKind.FILE);
// Project command overrides user command (last wins)
const deployCommand = commands.find((cmd) => cmd.name === 'deploy');
expect(deployCommand).toBeDefined();
expect(deployCommand?.kind).toBe(CommandKind.FILE);
});
it('should handle secondary conflicts when renaming extension commands', async () => {
// User has both /deploy and /gcp.deploy commands
const userCommand1 = createMockCommand('deploy', CommandKind.FILE);
const userCommand2 = createMockCommand('gcp.deploy', CommandKind.FILE);
// Extension also has a deploy command that will conflict with user's /deploy
const extensionCommand = {
...createMockCommand('deploy', CommandKind.FILE),
extensionName: 'gcp',
description: '[gcp] Deploy to Google Cloud',
};
const mockLoader = new MockCommandLoader([
userCommand1,
userCommand2,
extensionCommand,
]);
const service = await CommandService.create(
[mockLoader],
new AbortController().signal,
);
const commands = service.getCommands();
expect(commands).toHaveLength(3);
// Original user command keeps its name
const deployUser = commands.find(
(cmd) => cmd.name === 'deploy' && !cmd.extensionName,
);
expect(deployUser).toBeDefined();
// User's dot notation command keeps its name
const gcpDeployUser = commands.find(
(cmd) => cmd.name === 'gcp.deploy' && !cmd.extensionName,
);
expect(gcpDeployUser).toBeDefined();
// Extension command gets renamed with suffix due to secondary conflict
const deployExtension = commands.find(
(cmd) => cmd.name === 'gcp.deploy1' && cmd.extensionName === 'gcp',
);
expect(deployExtension).toBeDefined();
expect(deployExtension?.description).toBe('[gcp] Deploy to Google Cloud');
});
it('should handle multiple secondary conflicts with incrementing suffixes', async () => {
// User has /deploy, /gcp.deploy, and /gcp.deploy1
const userCommand1 = createMockCommand('deploy', CommandKind.FILE);
const userCommand2 = createMockCommand('gcp.deploy', CommandKind.FILE);
const userCommand3 = createMockCommand('gcp.deploy1', CommandKind.FILE);
// Extension has a deploy command
const extensionCommand = {
...createMockCommand('deploy', CommandKind.FILE),
extensionName: 'gcp',
description: '[gcp] Deploy to Google Cloud',
};
const mockLoader = new MockCommandLoader([
userCommand1,
userCommand2,
userCommand3,
extensionCommand,
]);
const service = await CommandService.create(
[mockLoader],
new AbortController().signal,
);
const commands = service.getCommands();
expect(commands).toHaveLength(4);
// Extension command gets renamed with suffix 2 due to multiple conflicts
const deployExtension = commands.find(
(cmd) => cmd.name === 'gcp.deploy2' && cmd.extensionName === 'gcp',
);
expect(deployExtension).toBeDefined();
expect(deployExtension?.description).toBe('[gcp] Deploy to Google Cloud');
});
});

View File

@@ -30,13 +30,17 @@ export class CommandService {
*
* This factory method orchestrates the entire command loading process. It
* runs all provided loaders in parallel, aggregates their results, handles
* name conflicts by letting the last-loaded command win, and then returns a
* name conflicts for extension commands by renaming them, and then returns a
* fully constructed `CommandService` instance.
*
* Conflict resolution:
* - Extension commands that conflict with existing commands are renamed to
* `extensionName.commandName`
* - Non-extension commands (built-in, user, project) override earlier commands
* with the same name based on loader order
*
* @param loaders An array of objects that conform to the `ICommandLoader`
* interface. The order of loaders is significant: if multiple loaders
* provide a command with the same name, the command from the loader that
* appears later in the array will take precedence.
* interface. Built-in commands should come first, followed by FileCommandLoader.
* @param signal An AbortSignal to cancel the loading process.
* @returns A promise that resolves to a new, fully initialized `CommandService` instance.
*/
@@ -57,12 +61,28 @@ export class CommandService {
}
}
// De-duplicate commands using a Map. The last one found with a given name wins.
// This creates a natural override system based on the order of the loaders
// passed to the constructor.
const commandMap = new Map<string, SlashCommand>();
for (const cmd of allCommands) {
commandMap.set(cmd.name, cmd);
let finalName = cmd.name;
// Extension commands get renamed if they conflict with existing commands
if (cmd.extensionName && commandMap.has(cmd.name)) {
let renamedName = `${cmd.extensionName}.${cmd.name}`;
let suffix = 1;
// Keep trying until we find a name that doesn't conflict
while (commandMap.has(renamedName)) {
renamedName = `${cmd.extensionName}.${cmd.name}${suffix}`;
suffix++;
}
finalName = renamedName;
}
commandMap.set(finalName, {
...cmd,
name: finalName,
});
}
const finalCommands = Object.freeze(Array.from(commandMap.values()));
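In isolation, the conflict-resolution rule introduced above is: an extension command that collides with an already-registered name is renamed to extensionName.commandName, further collisions append an incrementing numeric suffix, and non-extension commands simply overwrite earlier entries. A minimal standalone sketch of that rule (not the CommandService code itself):

// taken: names already registered by earlier loaders.
function resolveCommandName(
  name: string,
  extensionName: string | undefined,
  taken: ReadonlySet<string>,
): string {
  if (!extensionName || !taken.has(name)) {
    // Built-in, user, and project commands keep their name (last one wins).
    return name;
  }
  let candidate = `${extensionName}.${name}`;
  let suffix = 1;
  while (taken.has(candidate)) {
    candidate = `${extensionName}.${name}${suffix}`;
    suffix++;
  }
  return candidate;
}

// Mirrors the tests above, e.g. with 'deploy' and 'gcp.deploy' already taken:
// resolveCommandName('deploy', 'gcp', new Set(['deploy', 'gcp.deploy'])) === 'gcp.deploy1'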

Some files were not shown because too many files have changed in this diff.