Mirror of https://github.com/QwenLM/qwen-code.git (synced 2026-01-13 12:29:14 +00:00)

Compare commits: fix/1454-s ... feature/re (1 commit)

Commit: b6a482e090
@@ -25,7 +25,7 @@ Qwen Code is an open-source AI agent for the terminal, optimized for [Qwen3-Code

 - **OpenAI-compatible, OAuth free tier**: use an OpenAI-compatible API, or sign in with Qwen OAuth to get 2,000 free requests/day.
 - **Open-source, co-evolving**: both the framework and the Qwen3-Coder model are open-source—and they ship and evolve together.
 - **Agentic workflow, feature-rich**: rich built-in tools (Skills, SubAgents, Plan Mode) for a full agentic workflow and a Claude Code-like experience.
-- **Terminal-first, IDE-friendly**: built for developers who live in the command line, with optional integration for VS Code, Zed, and JetBrains IDEs.
+- **Terminal-first, IDE-friendly**: built for developers who live in the command line, with optional integration for VS Code and Zed.

 ## Installation
@@ -137,11 +137,10 @@ Use `-p` to run Qwen Code without the interactive UI—ideal for scripts, automa

 #### IDE integration

-Use Qwen Code inside your editor (VS Code, Zed, and JetBrains IDEs):
+Use Qwen Code inside your editor (VS Code and Zed):

 - [Use in VS Code](https://qwenlm.github.io/qwen-code-docs/en/users/integration-vscode/)
 - [Use in Zed](https://qwenlm.github.io/qwen-code-docs/en/users/integration-zed/)
-- [Use in JetBrains IDEs](https://qwenlm.github.io/qwen-code-docs/en/users/integration-jetbrains/)

 #### TypeScript SDK
@@ -12,7 +12,6 @@ export default {
   },
   'integration-vscode': 'Visual Studio Code',
   'integration-zed': 'Zed IDE',
-  'integration-jetbrains': 'JetBrains IDEs',
   'integration-github-action': 'Github Actions',
   'Code with Qwen Code': {
     type: 'separator',
@@ -59,7 +59,6 @@ Commands for managing AI tools and models.
 | ---------------- | --------------------------------------------- | --------------------------------------------- |
 | `/mcp` | List configured MCP servers and tools | `/mcp`, `/mcp desc` |
 | `/tools` | Display currently available tool list | `/tools`, `/tools desc` |
-| `/skills` | List and run available skills (experimental) | `/skills`, `/skills <name>` |
 | `/approval-mode` | Change approval mode for tool usage | `/approval-mode <mode (auto-edit)> --project` |
 | →`plan` | Analysis only, no execution | Secure review |
 | →`default` | Require approval for edits | Daily use |
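For quick reference, the approval-mode examples in this table translate to invocations like the following (a sketch; it assumes `--project` scopes the change to the current project, as the example column suggests):

```bash
# Switch tool approvals to auto-edit for this project
/approval-mode auto-edit --project

# Analysis-only mode for a secure review
/approval-mode plan --project
```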
@@ -27,14 +27,6 @@ Agent Skills package expertise into discoverable capabilities. Each Skill consis

 Skills are **model-invoked** — the model autonomously decides when to use them based on your request and the Skill’s description. This is different from slash commands, which are **user-invoked** (you explicitly type `/command`).

-If you want to invoke a Skill explicitly, use the `/skills` slash command:
-
-```bash
-/skills <skill-name>
-```
-
-The `/skills` command is only available when you run with `--experimental-skills`. Use autocomplete to browse available Skills and descriptions.
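As a recap of the workflow these removed lines documented (a usage sketch; the flag and commands are exactly the ones named above):

```bash
# Start the CLI with the experimental skills feature enabled
qwen --experimental-skills

# Inside the session: list skills, then invoke one explicitly
/skills
/skills <skill-name>
```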

 ### Benefits

 - Extend Qwen Code for your workflows

Binary file not shown (image removed by this commit; previous size: 36 KiB).
@@ -1,57 +0,0 @@
# JetBrains IDEs

> JetBrains IDEs provide native support for AI coding assistants through the Agent Control Protocol (ACP). This integration allows you to use Qwen Code directly within your JetBrains IDE with real-time code suggestions.

### Features

- **Native agent experience**: Integrated AI assistant panel within your JetBrains IDE
- **Agent Control Protocol**: Full support for ACP enabling advanced IDE interactions
- **Symbol management**: #-mention files to add them to the conversation context
- **Conversation history**: Access to past conversations within the IDE

### Requirements

- JetBrains IDE with ACP support (IntelliJ IDEA, WebStorm, PyCharm, etc.)
- Qwen Code CLI installed

### Installation

1. Install Qwen Code CLI:

   ```bash
   npm install -g @qwen-code/qwen-code
   ```

2. Open your JetBrains IDE and navigate to the AI Chat tool window.

3. Click the 3-dot menu in the upper-right corner, select **Configure ACP Agent**, and configure Qwen Code with the following settings:

   ```json
   {
     "agent_servers": {
       "qwen": {
         "command": "/path/to/qwen",
         "args": ["--acp"],
         "env": {}
       }
     }
   }
   ```
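The `command` field expects an absolute path to the CLI binary. Assuming the global npm install from step 1, one way to find it is:

```bash
# Print the absolute path of the installed qwen binary
which qwen
```

Use the printed path in place of `/path/to/qwen` above.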
4. The Qwen Code agent should now be available in the AI Assistant panel.

   

## Troubleshooting

### Agent not appearing

- Run `qwen --version` in a terminal to verify the installation
- Ensure your JetBrains IDE version supports ACP
- Restart your JetBrains IDE

### Qwen Code not responding

- Check your internet connection
- Verify the CLI works by running `qwen` in a terminal
- [File an issue on GitHub](https://github.com/qwenlm/qwen-code/issues) if the problem persists
package-lock.json (generated): 7 changed lines
@@ -6216,7 +6216,10 @@
|
||||
"version": "4.0.3",
|
||||
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz",
|
||||
"integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"peer": true,
|
||||
"dependencies": {
|
||||
"readdirp": "^4.0.1"
|
||||
},
|
||||
@@ -13879,7 +13882,10 @@
|
||||
"version": "4.1.2",
|
||||
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz",
|
||||
"integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==",
|
||||
"dev": true,
|
||||
"license": "MIT",
|
||||
"optional": true,
|
||||
"peer": true,
|
||||
"engines": {
|
||||
"node": ">= 14.18.0"
|
||||
},
|
||||
@@ -17968,7 +17974,6 @@
|
||||
"ajv-formats": "^3.0.0",
|
||||
"async-mutex": "^0.5.0",
|
||||
"chardet": "^2.1.0",
|
||||
"chokidar": "^4.0.3",
|
||||
"diff": "^7.0.0",
|
||||
"dotenv": "^17.1.0",
|
||||
"fast-levenshtein": "^2.0.6",
|
||||
|
||||
@@ -23,6 +23,8 @@
    "build-and-start": "npm run build && npm run start",
    "build:vscode": "node scripts/build_vscode_companion.js",
    "build:all": "npm run build && npm run build:sandbox && npm run build:vscode",
    "build:native": "node scripts/build_native.js",
    "build:native:all": "node scripts/build_native.js --all",
    "build:packages": "npm run build --workspaces",
    "build:sandbox": "node scripts/build_sandbox.js",
    "bundle": "npm run generate && node esbuild.config.js && node scripts/copy_bundle_assets.js",
@@ -170,17 +170,7 @@ function normalizeOutputFormat(
 }

 export async function parseArguments(settings: Settings): Promise<CliArgs> {
-  let rawArgv = hideBin(process.argv);
-
-  // hack: if the first argument is the CLI entry point, remove it
-  if (
-    rawArgv.length > 0 &&
-    (rawArgv[0].endsWith('/dist/qwen-cli/cli.js') ||
-      rawArgv[0].endsWith('/dist/cli.js'))
-  ) {
-    rawArgv = rawArgv.slice(1);
-  }
-
+  const rawArgv = hideBin(process.argv);
   const yargsInstance = yargs(rawArgv)
     .locale('en')
     .scriptName('qwen')
@@ -346,7 +346,6 @@ export async function main() {
     extensionEnablementManager,
     argv,
   );
-  registerCleanup(() => config.shutdown());

   if (config.getListExtensions()) {
     console.log('Installed extensions:');
@@ -31,7 +31,6 @@ import { quitCommand } from '../ui/commands/quitCommand.js';
 import { restoreCommand } from '../ui/commands/restoreCommand.js';
 import { resumeCommand } from '../ui/commands/resumeCommand.js';
 import { settingsCommand } from '../ui/commands/settingsCommand.js';
-import { skillsCommand } from '../ui/commands/skillsCommand.js';
 import { statsCommand } from '../ui/commands/statsCommand.js';
 import { summaryCommand } from '../ui/commands/summaryCommand.js';
 import { terminalSetupCommand } from '../ui/commands/terminalSetupCommand.js';

@@ -79,7 +78,6 @@ export class BuiltinCommandLoader implements ICommandLoader {
       quitCommand,
       restoreCommand(this.config),
       resumeCommand,
-      ...(this.config?.getExperimentalSkills?.() ? [skillsCommand] : []),
       statsCommand,
       summaryCommand,
       themeCommand,
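The removed registration line above uses a conditional-spread pattern to include a command only when a feature flag is on. A minimal standalone sketch of that pattern (the names here are hypothetical stand-ins, not the project's API):

```typescript
// Hypothetical illustration of the conditional-spread registration pattern.
interface Command {
  name: string;
}

const statsCommand: Command = { name: 'stats' };
const skillsCommand: Command = { name: 'skills' };

function buildCommands(experimentalSkillsEnabled: boolean): Command[] {
  return [
    statsCommand,
    // The spread contributes [skillsCommand] when the flag is on, and nothing otherwise.
    ...(experimentalSkillsEnabled ? [skillsCommand] : []),
  ];
}

console.log(buildCommands(true).map((c) => c.name));  // ['stats', 'skills']
console.log(buildCommands(false).map((c) => c.name)); // ['stats']
```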
@@ -1,132 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import {
|
||||
CommandKind,
|
||||
type CommandCompletionItem,
|
||||
type CommandContext,
|
||||
type SlashCommand,
|
||||
} from './types.js';
|
||||
import { MessageType, type HistoryItemSkillsList } from '../types.js';
|
||||
import { t } from '../../i18n/index.js';
|
||||
import { AsyncFzf } from 'fzf';
|
||||
import type { SkillConfig } from '@qwen-code/qwen-code-core';
|
||||
|
||||
export const skillsCommand: SlashCommand = {
|
||||
name: 'skills',
|
||||
get description() {
|
||||
return t('List available skills.');
|
||||
},
|
||||
kind: CommandKind.BUILT_IN,
|
||||
action: async (context: CommandContext, args?: string) => {
|
||||
const rawArgs = args?.trim() ?? '';
|
||||
const [skillName = ''] = rawArgs.split(/\s+/);
|
||||
|
||||
const skillManager = context.services.config?.getSkillManager();
|
||||
if (!skillManager) {
|
||||
context.ui.addItem(
|
||||
{
|
||||
type: MessageType.ERROR,
|
||||
text: t('Could not retrieve skill manager.'),
|
||||
},
|
||||
Date.now(),
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
const skills = await skillManager.listSkills();
|
||||
if (skills.length === 0) {
|
||||
context.ui.addItem(
|
||||
{
|
||||
type: MessageType.INFO,
|
||||
text: t('No skills are currently available.'),
|
||||
},
|
||||
Date.now(),
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
if (!skillName) {
|
||||
const sortedSkills = [...skills].sort((left, right) =>
|
||||
left.name.localeCompare(right.name),
|
||||
);
|
||||
const skillsListItem: HistoryItemSkillsList = {
|
||||
type: MessageType.SKILLS_LIST,
|
||||
skills: sortedSkills.map((skill) => ({ name: skill.name })),
|
||||
};
|
||||
context.ui.addItem(skillsListItem, Date.now());
|
||||
return;
|
||||
}
|
||||
const normalizedName = skillName.toLowerCase();
|
||||
const hasSkill = skills.some(
|
||||
(skill) => skill.name.toLowerCase() === normalizedName,
|
||||
);
|
||||
|
||||
if (!hasSkill) {
|
||||
context.ui.addItem(
|
||||
{
|
||||
type: MessageType.ERROR,
|
||||
text: t('Unknown skill: {{name}}', { name: skillName }),
|
||||
},
|
||||
Date.now(),
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
const rawInput = context.invocation?.raw ?? `/skills ${rawArgs}`;
|
||||
return {
|
||||
type: 'submit_prompt',
|
||||
content: [{ text: rawInput }],
|
||||
};
|
||||
},
|
||||
completion: async (
|
||||
context: CommandContext,
|
||||
partialArg: string,
|
||||
): Promise<CommandCompletionItem[]> => {
|
||||
const skillManager = context.services.config?.getSkillManager();
|
||||
if (!skillManager) {
|
||||
return [];
|
||||
}
|
||||
|
||||
const skills = await skillManager.listSkills();
|
||||
const normalizedPartial = partialArg.trim();
|
||||
const matches = await getSkillMatches(skills, normalizedPartial);
|
||||
|
||||
return matches.map((skill) => ({
|
||||
value: skill.name,
|
||||
description: skill.description,
|
||||
}));
|
||||
},
|
||||
};
|
||||
|
||||
async function getSkillMatches(
|
||||
skills: SkillConfig[],
|
||||
query: string,
|
||||
): Promise<SkillConfig[]> {
|
||||
if (!query) {
|
||||
return skills;
|
||||
}
|
||||
|
||||
const names = skills.map((skill) => skill.name);
|
||||
const skillMap = new Map(skills.map((skill) => [skill.name, skill]));
|
||||
|
||||
try {
|
||||
const fzf = new AsyncFzf(names, {
|
||||
fuzzy: 'v2',
|
||||
casing: 'case-insensitive',
|
||||
});
|
||||
const results = (await fzf.find(query)) as Array<{ item: string }>;
|
||||
return results
|
||||
.map((result) => skillMap.get(result.item))
|
||||
.filter((skill): skill is SkillConfig => !!skill);
|
||||
} catch (error) {
|
||||
console.error('[skillsCommand] Fuzzy match failed:', error);
|
||||
const lowerQuery = query.toLowerCase();
|
||||
return skills.filter((skill) =>
|
||||
skill.name.toLowerCase().startsWith(lowerQuery),
|
||||
);
|
||||
}
|
||||
}
|
||||
@@ -209,12 +209,6 @@ export enum CommandKind {
   MCP_PROMPT = 'mcp-prompt',
 }

-export interface CommandCompletionItem {
-  value: string;
-  label?: string;
-  description?: string;
-}
-
 // The standardized contract for any command in the system.
 export interface SlashCommand {
   name: string;

@@ -240,7 +234,7 @@ export interface SlashCommand {
   completion?: (
     context: CommandContext,
     partialArg: string,
-  ) => Promise<Array<string | CommandCompletionItem> | null>;
+  ) => Promise<string[]>;

   subCommands?: SlashCommand[];
 }
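For context on the richer (old) completion contract being removed: a command's completion could return items carrying a description, which the suggestions UI rendered next to each value. A minimal sketch of such a provider; the `pdf`/`xlsx` entries are illustrative and mirror the fixtures used in the test file further down this diff:

```typescript
import type { CommandContext, CommandCompletionItem } from './types.js';

// Sketch of a completion provider under the old (removed) signature.
async function completeSkillNames(
  _context: CommandContext,
  partialArg: string,
): Promise<Array<string | CommandCompletionItem>> {
  const items: CommandCompletionItem[] = [
    { value: 'pdf', description: 'Create PDF documents' },
    { value: 'xlsx', description: 'Work with spreadsheets' },
  ];
  // Simple prefix filter on the fragment the user has typed so far.
  const query = partialArg.trim().toLowerCase();
  return items.filter((item) => item.value.startsWith(query));
}
```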
@@ -30,7 +30,6 @@ import { Help } from './Help.js';
|
||||
import type { SlashCommand } from '../commands/types.js';
|
||||
import { ExtensionsList } from './views/ExtensionsList.js';
|
||||
import { getMCPServerStatus } from '@qwen-code/qwen-code-core';
|
||||
import { SkillsList } from './views/SkillsList.js';
|
||||
import { ToolsList } from './views/ToolsList.js';
|
||||
import { McpStatus } from './views/McpStatus.js';
|
||||
|
||||
@@ -154,9 +153,6 @@ const HistoryItemDisplayComponent: React.FC<HistoryItemDisplayProps> = ({
|
||||
showDescriptions={itemForDisplay.showDescriptions}
|
||||
/>
|
||||
)}
|
||||
{itemForDisplay.type === 'skills_list' && (
|
||||
<SkillsList skills={itemForDisplay.skills} />
|
||||
)}
|
||||
{itemForDisplay.type === 'mcp_status' && (
|
||||
<McpStatus {...itemForDisplay} serverStatus={getMCPServerStatus} />
|
||||
)}
|
||||
|
||||
@@ -106,7 +106,7 @@ export function SuggestionsDisplay({
|
||||
</Box>
|
||||
|
||||
{suggestion.description && (
|
||||
<Box flexGrow={1} paddingLeft={2}>
|
||||
<Box flexGrow={1} paddingLeft={3}>
|
||||
<Text color={textColor} wrap="truncate">
|
||||
{suggestion.description}
|
||||
</Text>
|
||||
|
||||
@@ -23,7 +23,7 @@ export const InfoMessage: React.FC<InfoMessageProps> = ({ text }) => {
|
||||
const prefixWidth = prefix.length;
|
||||
|
||||
return (
|
||||
<Box flexDirection="row" marginBottom={1}>
|
||||
<Box flexDirection="row" marginTop={1}>
|
||||
<Box width={prefixWidth}>
|
||||
<Text color={theme.status.warning}>{prefix}</Text>
|
||||
</Box>
|
||||
|
||||
@@ -18,7 +18,7 @@ export const WarningMessage: React.FC<WarningMessageProps> = ({ text }) => {
|
||||
const prefixWidth = 3;
|
||||
|
||||
return (
|
||||
<Box flexDirection="row" marginBottom={1}>
|
||||
<Box flexDirection="row" marginTop={1}>
|
||||
<Box width={prefixWidth}>
|
||||
<Text color={Colors.AccentYellow}>{prefix}</Text>
|
||||
</Box>
|
||||
|
||||
@@ -1,36 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import type React from 'react';
|
||||
import { Box, Text } from 'ink';
|
||||
import { theme } from '../../semantic-colors.js';
|
||||
import { type SkillDefinition } from '../../types.js';
|
||||
import { t } from '../../../i18n/index.js';
|
||||
|
||||
interface SkillsListProps {
|
||||
skills: readonly SkillDefinition[];
|
||||
}
|
||||
|
||||
export const SkillsList: React.FC<SkillsListProps> = ({ skills }) => (
|
||||
<Box flexDirection="column" marginBottom={1}>
|
||||
<Text bold color={theme.text.primary}>
|
||||
{t('Available skills:')}
|
||||
</Text>
|
||||
<Box height={1} />
|
||||
{skills.length > 0 ? (
|
||||
skills.map((skill) => (
|
||||
<Box key={skill.name} flexDirection="row">
|
||||
<Text color={theme.text.primary}>{' '}- </Text>
|
||||
<Text bold color={theme.text.accent}>
|
||||
{skill.name}
|
||||
</Text>
|
||||
</Box>
|
||||
))
|
||||
) : (
|
||||
<Text color={theme.text.primary}> {t('No skills available')}</Text>
|
||||
)}
|
||||
</Box>
|
||||
);
|
||||
@@ -573,45 +573,6 @@ describe('useSlashCompletion', () => {
|
||||
});
|
||||
});
|
||||
|
||||
it('should map completion items with descriptions for argument suggestions', async () => {
|
||||
const mockCompletionFn = vi.fn().mockResolvedValue([
|
||||
{ value: 'pdf', description: 'Create PDF documents' },
|
||||
{ value: 'xlsx', description: 'Work with spreadsheets' },
|
||||
]);
|
||||
|
||||
const slashCommands = [
|
||||
createTestCommand({
|
||||
name: 'skills',
|
||||
description: 'List available skills',
|
||||
completion: mockCompletionFn,
|
||||
}),
|
||||
];
|
||||
|
||||
const { result } = renderHook(() =>
|
||||
useTestHarnessForSlashCompletion(
|
||||
true,
|
||||
'/skills ',
|
||||
slashCommands,
|
||||
mockCommandContext,
|
||||
),
|
||||
);
|
||||
|
||||
await waitFor(() => {
|
||||
expect(result.current.suggestions).toEqual([
|
||||
{
|
||||
label: 'pdf',
|
||||
value: 'pdf',
|
||||
description: 'Create PDF documents',
|
||||
},
|
||||
{
|
||||
label: 'xlsx',
|
||||
value: 'xlsx',
|
||||
description: 'Work with spreadsheets',
|
||||
},
|
||||
]);
|
||||
});
|
||||
});
|
||||
|
||||
it('should call command.completion with an empty string when args start with a space', async () => {
|
||||
const mockCompletionFn = vi
|
||||
.fn()
|
||||
|
||||
@@ -9,7 +9,6 @@ import { AsyncFzf } from 'fzf';
|
||||
import type { Suggestion } from '../components/SuggestionsDisplay.js';
|
||||
import {
|
||||
CommandKind,
|
||||
type CommandCompletionItem,
|
||||
type CommandContext,
|
||||
type SlashCommand,
|
||||
} from '../commands/types.js';
|
||||
@@ -216,9 +215,10 @@ function useCommandSuggestions(
|
||||
)) || [];
|
||||
|
||||
if (!signal.aborted) {
|
||||
const finalSuggestions = results
|
||||
.map((item) => toSuggestion(item))
|
||||
.filter((suggestion): suggestion is Suggestion => !!suggestion);
|
||||
const finalSuggestions = results.map((s) => ({
|
||||
label: s,
|
||||
value: s,
|
||||
}));
|
||||
setSuggestions(finalSuggestions);
|
||||
setIsLoading(false);
|
||||
}
|
||||
@@ -310,20 +310,6 @@ function useCommandSuggestions(
|
||||
return { suggestions, isLoading };
|
||||
}
|
||||
|
||||
function toSuggestion(item: string | CommandCompletionItem): Suggestion | null {
|
||||
if (typeof item === 'string') {
|
||||
return { label: item, value: item };
|
||||
}
|
||||
if (!item.value) {
|
||||
return null;
|
||||
}
|
||||
return {
|
||||
label: item.label ?? item.value,
|
||||
value: item.value,
|
||||
description: item.description,
|
||||
};
|
||||
}
|
||||
|
||||
function useCompletionPositions(
|
||||
query: string | null,
|
||||
parserResult: CommandParserResult,
|
||||
|
||||
@@ -201,21 +201,12 @@ export interface ToolDefinition {
   description?: string;
 }

-export interface SkillDefinition {
-  name: string;
-}
-
 export type HistoryItemToolsList = HistoryItemBase & {
   type: 'tools_list';
   tools: ToolDefinition[];
   showDescriptions: boolean;
 };

-export type HistoryItemSkillsList = HistoryItemBase & {
-  type: 'skills_list';
-  skills: SkillDefinition[];
-};
-
 // JSON-friendly types for using as a simple data model showing info about an
 // MCP Server.
 export interface JsonMcpTool {

@@ -277,7 +268,6 @@ export type HistoryItemWithoutId =
   | HistoryItemCompression
   | HistoryItemExtensionsList
   | HistoryItemToolsList
-  | HistoryItemSkillsList
   | HistoryItemMcpStatus;

 export type HistoryItem = HistoryItemWithoutId & { id: number };

@@ -299,7 +289,6 @@ export enum MessageType {
   SUMMARY = 'summary',
   EXTENSIONS_LIST = 'extensions_list',
   TOOLS_LIST = 'tools_list',
-  SKILLS_LIST = 'skills_list',
   MCP_STATUS = 'mcp_status',
 }
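To show how the removed pieces fit together: `/skills` built a `skills_list` history item from `SkillDefinition` values and handed it to the `SkillsList` view. A standalone sketch that mirrors the removed shapes rather than importing them (the skill names are illustrative):

```typescript
// Standalone sketch mirroring the removed SkillDefinition / HistoryItemSkillsList shapes.
interface SkillDefinition {
  name: string;
}

type HistoryItemSkillsList = {
  type: 'skills_list';
  skills: SkillDefinition[];
};

// How /skills assembled the item before handing it to the UI (see skillsCommand.ts above).
const skillsListItem: HistoryItemSkillsList = {
  type: 'skills_list',
  skills: [{ name: 'xlsx' }, { name: 'pdf' }].sort((a, b) =>
    a.name.localeCompare(b.name),
  ),
};

console.log(skillsListItem.skills.map((s) => s.name)); // ['pdf', 'xlsx']
```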
@@ -27,6 +27,7 @@
|
||||
"@google/genai": "1.30.0",
|
||||
"@modelcontextprotocol/sdk": "^1.25.1",
|
||||
"@opentelemetry/api": "^1.9.0",
|
||||
"async-mutex": "^0.5.0",
|
||||
"@opentelemetry/exporter-logs-otlp-grpc": "^0.203.0",
|
||||
"@opentelemetry/exporter-logs-otlp-http": "^0.203.0",
|
||||
"@opentelemetry/exporter-metrics-otlp-grpc": "^0.203.0",
|
||||
@@ -39,9 +40,7 @@
|
||||
"@xterm/headless": "5.5.0",
|
||||
"ajv": "^8.17.1",
|
||||
"ajv-formats": "^3.0.0",
|
||||
"async-mutex": "^0.5.0",
|
||||
"chardet": "^2.1.0",
|
||||
"chokidar": "^4.0.3",
|
||||
"diff": "^7.0.0",
|
||||
"dotenv": "^17.1.0",
|
||||
"fast-levenshtein": "^2.0.6",
|
||||
|
||||
@@ -673,7 +673,6 @@ export class Config {
     this.promptRegistry = new PromptRegistry();
     this.subagentManager = new SubagentManager(this);
     this.skillManager = new SkillManager(this);
-    await this.skillManager.startWatching();

     // Load session subagents if they were provided before initialization
     if (this.sessionSubagents.length > 0) {

@@ -774,13 +773,6 @@ export class Config {
     return this.sessionId;
   }

-  /**
-   * Releases resources owned by the config instance.
-   */
-  async shutdown(): Promise<void> {
-    this.skillManager?.stopWatching();
-  }
-
   /**
    * Starts a new session and resets session-scoped services.
    */
@@ -5,10 +5,8 @@
|
||||
*/
|
||||
|
||||
import * as fs from 'fs/promises';
|
||||
import * as fsSync from 'fs';
|
||||
import * as path from 'path';
|
||||
import * as os from 'os';
|
||||
import { watch as watchFs, type FSWatcher } from 'chokidar';
|
||||
import { parse as parseYaml } from '../utils/yaml-parser.js';
|
||||
import type {
|
||||
SkillConfig,
|
||||
@@ -31,9 +29,6 @@ export class SkillManager {
|
||||
private skillsCache: Map<SkillLevel, SkillConfig[]> | null = null;
|
||||
private readonly changeListeners: Set<() => void> = new Set();
|
||||
private parseErrors: Map<string, SkillError> = new Map();
|
||||
private readonly watchers: Map<string, FSWatcher> = new Map();
|
||||
private watchStarted = false;
|
||||
private refreshTimer: NodeJS.Timeout | null = null;
|
||||
|
||||
constructor(private readonly config: Config) {}
|
||||
|
||||
@@ -226,36 +221,6 @@ export class SkillManager {
|
||||
this.notifyChangeListeners();
|
||||
}
|
||||
|
||||
/**
|
||||
* Starts watching skill directories for changes.
|
||||
*/
|
||||
async startWatching(): Promise<void> {
|
||||
if (this.watchStarted) {
|
||||
return;
|
||||
}
|
||||
|
||||
this.watchStarted = true;
|
||||
await this.refreshCache();
|
||||
this.updateWatchersFromCache();
|
||||
}
|
||||
|
||||
/**
|
||||
* Stops watching skill directories for changes.
|
||||
*/
|
||||
stopWatching(): void {
|
||||
for (const watcher of this.watchers.values()) {
|
||||
void watcher.close().catch((error) => {
|
||||
console.warn('Failed to close skills watcher:', error);
|
||||
});
|
||||
}
|
||||
this.watchers.clear();
|
||||
this.watchStarted = false;
|
||||
if (this.refreshTimer) {
|
||||
clearTimeout(this.refreshTimer);
|
||||
this.refreshTimer = null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Parses a SKILL.md file and returns the configuration.
|
||||
*
|
||||
@@ -484,77 +449,4 @@ export class SkillManager {
|
||||
this.skillsCache.set(level, levelSkills);
|
||||
}
|
||||
}
|
||||
|
||||
private updateWatchersFromCache(): void {
|
||||
const desiredPaths = new Set<string>();
|
||||
|
||||
for (const level of ['project', 'user'] as const) {
|
||||
const baseDir = this.getSkillsBaseDir(level);
|
||||
const parentDir = path.dirname(baseDir);
|
||||
if (fsSync.existsSync(parentDir)) {
|
||||
desiredPaths.add(parentDir);
|
||||
}
|
||||
if (fsSync.existsSync(baseDir)) {
|
||||
desiredPaths.add(baseDir);
|
||||
}
|
||||
|
||||
const levelSkills = this.skillsCache?.get(level) || [];
|
||||
for (const skill of levelSkills) {
|
||||
const skillDir = path.dirname(skill.filePath);
|
||||
if (fsSync.existsSync(skillDir)) {
|
||||
desiredPaths.add(skillDir);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for (const existingPath of this.watchers.keys()) {
|
||||
if (!desiredPaths.has(existingPath)) {
|
||||
void this.watchers
|
||||
.get(existingPath)
|
||||
?.close()
|
||||
.catch((error) => {
|
||||
console.warn(
|
||||
`Failed to close skills watcher for ${existingPath}:`,
|
||||
error,
|
||||
);
|
||||
});
|
||||
this.watchers.delete(existingPath);
|
||||
}
|
||||
}
|
||||
|
||||
for (const watchPath of desiredPaths) {
|
||||
if (this.watchers.has(watchPath)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
try {
|
||||
const watcher = watchFs(watchPath, {
|
||||
ignoreInitial: true,
|
||||
})
|
||||
.on('all', () => {
|
||||
this.scheduleRefresh();
|
||||
})
|
||||
.on('error', (error) => {
|
||||
console.warn(`Skills watcher error for ${watchPath}:`, error);
|
||||
});
|
||||
this.watchers.set(watchPath, watcher);
|
||||
} catch (error) {
|
||||
console.warn(
|
||||
`Failed to watch skills directory at ${watchPath}:`,
|
||||
error,
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private scheduleRefresh(): void {
|
||||
if (this.refreshTimer) {
|
||||
clearTimeout(this.refreshTimer);
|
||||
}
|
||||
|
||||
this.refreshTimer = setTimeout(() => {
|
||||
this.refreshTimer = null;
|
||||
void this.refreshCache().then(() => this.updateWatchersFromCache());
|
||||
}, 150);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,98 +1,67 @@
|
||||
// Vitest Snapshot v1, https://vitest.dev/guide/snapshot.html
|
||||
|
||||
exports[`ShellTool > getDescription > should return the non-windows description when not on windows 1`] = `
|
||||
"Executes a given shell command (as \`bash -c <command>\`) in a persistent shell session with optional timeout, ensuring proper handling and security measures.
|
||||
"This tool executes a given shell command as \`bash -c <command>\`. Command can start background processes using \`&\`. Command is executed as a subprocess that leads its own process group. Command process group can be terminated as \`kill -- -PGID\` or signaled as \`kill -s SIGNAL -- -PGID\`.
|
||||
|
||||
IMPORTANT: This tool is for terminal operations like git, npm, docker, etc. DO NOT use it for file operations (reading, writing, editing, searching, finding files) - use the specialized tools for this instead.
|
||||
**Background vs Foreground Execution:**
|
||||
You should decide whether commands should run in background or foreground based on their nature:
|
||||
|
||||
**Use background execution (is_background: true) for:**
|
||||
- Long-running development servers: \`npm run start\`, \`npm run dev\`, \`yarn dev\`, \`bun run start\`
|
||||
- Build watchers: \`npm run watch\`, \`webpack --watch\`
|
||||
- Database servers: \`mongod\`, \`mysql\`, \`redis-server\`
|
||||
- Web servers: \`python -m http.server\`, \`php -S localhost:8000\`
|
||||
- Any command expected to run indefinitely until manually stopped
|
||||
|
||||
**Use foreground execution (is_background: false) for:**
|
||||
- One-time commands: \`ls\`, \`cat\`, \`grep\`
|
||||
- Build commands: \`npm run build\`, \`make\`
|
||||
- Installation commands: \`npm install\`, \`pip install\`
|
||||
- Git operations: \`git commit\`, \`git push\`
|
||||
- Test runs: \`npm test\`, \`pytest\`
|
||||
|
||||
The following information is returned:
|
||||
|
||||
**Usage notes**:
|
||||
- The command argument is required.
|
||||
- You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will timeout after 120000ms (2 minutes).
|
||||
- It is very helpful if you write a clear, concise description of what this command does in 5-10 words.
|
||||
|
||||
- Avoid using run_shell_command with the \`find\`, \`grep\`, \`cat\`, \`head\`, \`tail\`, \`sed\`, \`awk\`, or \`echo\` commands, unless explicitly instructed or when these commands are truly necessary for the task. Instead, always prefer using the dedicated tools for these commands:
|
||||
- File search: Use glob (NOT find or ls)
|
||||
- Content search: Use grep_search (NOT grep or rg)
|
||||
- Read files: Use read_file (NOT cat/head/tail)
|
||||
- Edit files: Use edit (NOT sed/awk)
|
||||
- Write files: Use write_file (NOT echo >/cat <<EOF)
|
||||
- Communication: Output text directly (NOT echo/printf)
|
||||
- When issuing multiple commands:
|
||||
- If the commands are independent and can run in parallel, make multiple run_shell_command tool calls in a single message. For example, if you need to run "git status" and "git diff", send a single message with two run_shell_command tool calls in parallel.
|
||||
- If the commands depend on each other and must run sequentially, use a single run_shell_command call with '&&' to chain them together (e.g., \`git add . && git commit -m "message" && git push\`). For instance, if one operation must complete before another starts (like mkdir before cp, Write before run_shell_command for git operations, or git add before git commit), run these operations sequentially instead.
|
||||
- Use ';' only when you need to run commands sequentially but don't care if earlier commands fail
|
||||
- DO NOT use newlines to separate commands (newlines are ok in quoted strings)
|
||||
- Try to maintain your current working directory throughout the session by using absolute paths and avoiding usage of \`cd\`. You may use \`cd\` if the User explicitly requests it.
|
||||
<good-example>
|
||||
pytest /foo/bar/tests
|
||||
</good-example>
|
||||
<bad-example>
|
||||
cd /foo/bar && pytest tests
|
||||
</bad-example>
|
||||
|
||||
**Background vs Foreground Execution:**
|
||||
- You should decide whether commands should run in background or foreground based on their nature:
|
||||
- Use background execution (is_background: true) for:
|
||||
- Long-running development servers: \`npm run start\`, \`npm run dev\`, \`yarn dev\`, \`bun run start\`
|
||||
- Build watchers: \`npm run watch\`, \`webpack --watch\`
|
||||
- Database servers: \`mongod\`, \`mysql\`, \`redis-server\`
|
||||
- Web servers: \`python -m http.server\`, \`php -S localhost:8000\`
|
||||
- Any command expected to run indefinitely until manually stopped
|
||||
|
||||
- Command is executed as a subprocess that leads its own process group. Command process group can be terminated as \`kill -- -PGID\` or signaled as \`kill -s SIGNAL -- -PGID\`.
|
||||
- Use foreground execution (is_background: false) for:
|
||||
- One-time commands: \`ls\`, \`cat\`, \`grep\`
|
||||
- Build commands: \`npm run build\`, \`make\`
|
||||
- Installation commands: \`npm install\`, \`pip install\`
|
||||
- Git operations: \`git commit\`, \`git push\`
|
||||
- Test runs: \`npm test\`, \`pytest\`
|
||||
"
|
||||
Command: Executed command.
|
||||
Directory: Directory where command was executed, or \`(root)\`.
|
||||
Stdout: Output on stdout stream. Can be \`(empty)\` or partial on error and for any unwaited background processes.
|
||||
Stderr: Output on stderr stream. Can be \`(empty)\` or partial on error and for any unwaited background processes.
|
||||
Error: Error or \`(none)\` if no error was reported for the subprocess.
|
||||
Exit Code: Exit code or \`(none)\` if terminated by signal.
|
||||
Signal: Signal number or \`(none)\` if no signal was received.
|
||||
Background PIDs: List of background processes started or \`(none)\`.
|
||||
Process Group PGID: Process group started or \`(none)\`"
|
||||
`;
|
||||
|
||||
exports[`ShellTool > getDescription > should return the windows description when on windows 1`] = `
|
||||
"Executes a given shell command (as \`cmd.exe /c <command>\`) in a persistent shell session with optional timeout, ensuring proper handling and security measures.
|
||||
"This tool executes a given shell command as \`cmd.exe /c <command>\`. Command can start background processes using \`start /b\`.
|
||||
|
||||
IMPORTANT: This tool is for terminal operations like git, npm, docker, etc. DO NOT use it for file operations (reading, writing, editing, searching, finding files) - use the specialized tools for this instead.
|
||||
**Background vs Foreground Execution:**
|
||||
You should decide whether commands should run in background or foreground based on their nature:
|
||||
|
||||
**Use background execution (is_background: true) for:**
|
||||
- Long-running development servers: \`npm run start\`, \`npm run dev\`, \`yarn dev\`, \`bun run start\`
|
||||
- Build watchers: \`npm run watch\`, \`webpack --watch\`
|
||||
- Database servers: \`mongod\`, \`mysql\`, \`redis-server\`
|
||||
- Web servers: \`python -m http.server\`, \`php -S localhost:8000\`
|
||||
- Any command expected to run indefinitely until manually stopped
|
||||
|
||||
**Use foreground execution (is_background: false) for:**
|
||||
- One-time commands: \`ls\`, \`cat\`, \`grep\`
|
||||
- Build commands: \`npm run build\`, \`make\`
|
||||
- Installation commands: \`npm install\`, \`pip install\`
|
||||
- Git operations: \`git commit\`, \`git push\`
|
||||
- Test runs: \`npm test\`, \`pytest\`
|
||||
|
||||
The following information is returned:
|
||||
|
||||
**Usage notes**:
|
||||
- The command argument is required.
|
||||
- You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will timeout after 120000ms (2 minutes).
|
||||
- It is very helpful if you write a clear, concise description of what this command does in 5-10 words.
|
||||
|
||||
- Avoid using run_shell_command with the \`find\`, \`grep\`, \`cat\`, \`head\`, \`tail\`, \`sed\`, \`awk\`, or \`echo\` commands, unless explicitly instructed or when these commands are truly necessary for the task. Instead, always prefer using the dedicated tools for these commands:
|
||||
- File search: Use glob (NOT find or ls)
|
||||
- Content search: Use grep_search (NOT grep or rg)
|
||||
- Read files: Use read_file (NOT cat/head/tail)
|
||||
- Edit files: Use edit (NOT sed/awk)
|
||||
- Write files: Use write_file (NOT echo >/cat <<EOF)
|
||||
- Communication: Output text directly (NOT echo/printf)
|
||||
- When issuing multiple commands:
|
||||
- If the commands are independent and can run in parallel, make multiple run_shell_command tool calls in a single message. For example, if you need to run "git status" and "git diff", send a single message with two run_shell_command tool calls in parallel.
|
||||
- If the commands depend on each other and must run sequentially, use a single run_shell_command call with '&&' to chain them together (e.g., \`git add . && git commit -m "message" && git push\`). For instance, if one operation must complete before another starts (like mkdir before cp, Write before run_shell_command for git operations, or git add before git commit), run these operations sequentially instead.
|
||||
- Use ';' only when you need to run commands sequentially but don't care if earlier commands fail
|
||||
- DO NOT use newlines to separate commands (newlines are ok in quoted strings)
|
||||
- Try to maintain your current working directory throughout the session by using absolute paths and avoiding usage of \`cd\`. You may use \`cd\` if the User explicitly requests it.
|
||||
<good-example>
|
||||
pytest /foo/bar/tests
|
||||
</good-example>
|
||||
<bad-example>
|
||||
cd /foo/bar && pytest tests
|
||||
</bad-example>
|
||||
|
||||
**Background vs Foreground Execution:**
|
||||
- You should decide whether commands should run in background or foreground based on their nature:
|
||||
- Use background execution (is_background: true) for:
|
||||
- Long-running development servers: \`npm run start\`, \`npm run dev\`, \`yarn dev\`, \`bun run start\`
|
||||
- Build watchers: \`npm run watch\`, \`webpack --watch\`
|
||||
- Database servers: \`mongod\`, \`mysql\`, \`redis-server\`
|
||||
- Web servers: \`python -m http.server\`, \`php -S localhost:8000\`
|
||||
- Any command expected to run indefinitely until manually stopped
|
||||
|
||||
- Use foreground execution (is_background: false) for:
|
||||
- One-time commands: \`ls\`, \`cat\`, \`grep\`
|
||||
- Build commands: \`npm run build\`, \`make\`
|
||||
- Installation commands: \`npm install\`, \`pip install\`
|
||||
- Git operations: \`git commit\`, \`git push\`
|
||||
- Test runs: \`npm test\`, \`pytest\`
|
||||
"
|
||||
Command: Executed command.
|
||||
Directory: Directory where command was executed, or \`(root)\`.
|
||||
Stdout: Output on stdout stream. Can be \`(empty)\` or partial on error and for any unwaited background processes.
|
||||
Stderr: Output on stderr stream. Can be \`(empty)\` or partial on error and for any unwaited background processes.
|
||||
Error: Error or \`(none)\` if no error was reported for the subprocess.
|
||||
Exit Code: Exit code or \`(none)\` if terminated by signal.
|
||||
Signal: Signal number or \`(none)\` if no signal was received.
|
||||
Background PIDs: List of background processes started or \`(none)\`.
|
||||
Process Group PGID: Process group started or \`(none)\`"
|
||||
`;
|
||||
|
||||
@@ -59,9 +59,6 @@ describe('ShellTool', () => {
|
||||
getWorkspaceContext: vi
|
||||
.fn()
|
||||
.mockReturnValue(createMockWorkspaceContext('/test/dir')),
|
||||
storage: {
|
||||
getUserSkillsDir: vi.fn().mockReturnValue('/test/dir/.qwen/skills'),
|
||||
},
|
||||
getGeminiClient: vi.fn(),
|
||||
getGitCoAuthor: vi.fn().mockReturnValue({
|
||||
enabled: true,
|
||||
@@ -145,42 +142,6 @@ describe('ShellTool', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should throw an error for a directory within the user skills directory', () => {
|
||||
expect(() =>
|
||||
shellTool.build({
|
||||
command: 'ls',
|
||||
directory: '/test/dir/.qwen/skills/my-skill',
|
||||
is_background: false,
|
||||
}),
|
||||
).toThrow(
|
||||
'Explicitly running shell commands from within the user skills directory is not allowed. Please use absolute paths for command parameter instead.',
|
||||
);
|
||||
});
|
||||
|
||||
it('should throw an error for the user skills directory itself', () => {
|
||||
expect(() =>
|
||||
shellTool.build({
|
||||
command: 'ls',
|
||||
directory: '/test/dir/.qwen/skills',
|
||||
is_background: false,
|
||||
}),
|
||||
).toThrow(
|
||||
'Explicitly running shell commands from within the user skills directory is not allowed. Please use absolute paths for command parameter instead.',
|
||||
);
|
||||
});
|
||||
|
||||
it('should resolve directory path before checking user skills directory', () => {
|
||||
expect(() =>
|
||||
shellTool.build({
|
||||
command: 'ls',
|
||||
directory: '/test/dir/.qwen/skills/../skills/my-skill',
|
||||
is_background: false,
|
||||
}),
|
||||
).toThrow(
|
||||
'Explicitly running shell commands from within the user skills directory is not allowed. Please use absolute paths for command parameter instead.',
|
||||
);
|
||||
});
|
||||
|
||||
it('should return an invocation for a valid absolute directory path', () => {
|
||||
(mockConfig.getWorkspaceContext as Mock).mockReturnValue(
|
||||
createMockWorkspaceContext('/test/dir', ['/another/workspace']),
|
||||
@@ -709,7 +670,7 @@ describe('ShellTool', () => {
|
||||
),
|
||||
expect.any(String),
|
||||
expect.any(Function),
|
||||
expect.any(AbortSignal),
|
||||
mockAbortSignal,
|
||||
false,
|
||||
{},
|
||||
);
|
||||
@@ -900,7 +861,7 @@ describe('ShellTool', () => {
|
||||
),
|
||||
expect.any(String),
|
||||
expect.any(Function),
|
||||
expect.any(AbortSignal),
|
||||
mockAbortSignal,
|
||||
false,
|
||||
{},
|
||||
);
|
||||
@@ -909,8 +870,8 @@ describe('ShellTool', () => {
|
||||
it('should add co-author to git commit with multi-line message', async () => {
|
||||
const command = `git commit -m "Fix bug
|
||||
|
||||
This is a detailed description
|
||||
spanning multiple lines"`;
|
||||
This is a detailed description
|
||||
spanning multiple lines"`;
|
||||
const invocation = shellTool.build({ command, is_background: false });
|
||||
const promise = invocation.execute(mockAbortSignal);
|
||||
|
||||
@@ -933,7 +894,7 @@ describe('ShellTool', () => {
|
||||
),
|
||||
expect.any(String),
|
||||
expect.any(Function),
|
||||
expect.any(AbortSignal),
|
||||
mockAbortSignal,
|
||||
false,
|
||||
{},
|
||||
);
|
||||
@@ -1038,248 +999,4 @@ describe('ShellTool', () => {
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('timeout parameter', () => {
|
||||
it('should validate timeout parameter correctly', () => {
|
||||
// Valid timeout
|
||||
expect(() => {
|
||||
shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
timeout: 5000,
|
||||
});
|
||||
}).not.toThrow();
|
||||
|
||||
// Valid small timeout
|
||||
expect(() => {
|
||||
shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
timeout: 500,
|
||||
});
|
||||
}).not.toThrow();
|
||||
|
||||
// Zero timeout
|
||||
expect(() => {
|
||||
shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
timeout: 0,
|
||||
});
|
||||
}).toThrow('Timeout must be a positive number.');
|
||||
|
||||
// Negative timeout
|
||||
expect(() => {
|
||||
shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
timeout: -1000,
|
||||
});
|
||||
}).toThrow('Timeout must be a positive number.');
|
||||
|
||||
// Timeout too large
|
||||
expect(() => {
|
||||
shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
timeout: 700000,
|
||||
});
|
||||
}).toThrow('Timeout cannot exceed 600000ms (10 minutes).');
|
||||
|
||||
// Non-integer timeout
|
||||
expect(() => {
|
||||
shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
timeout: 5000.5,
|
||||
});
|
||||
}).toThrow('Timeout must be an integer number of milliseconds.');
|
||||
|
||||
// Non-number timeout (schema validation catches this first)
|
||||
expect(() => {
|
||||
shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
timeout: 'invalid' as unknown as number,
|
||||
});
|
||||
}).toThrow('params/timeout must be number');
|
||||
});
|
||||
|
||||
it('should include timeout in description for foreground commands', () => {
|
||||
const invocation = shellTool.build({
|
||||
command: 'npm test',
|
||||
is_background: false,
|
||||
timeout: 30000,
|
||||
});
|
||||
|
||||
expect(invocation.getDescription()).toBe('npm test [timeout: 30000ms]');
|
||||
});
|
||||
|
||||
it('should not include timeout in description for background commands', () => {
|
||||
const invocation = shellTool.build({
|
||||
command: 'npm start',
|
||||
is_background: true,
|
||||
timeout: 30000,
|
||||
});
|
||||
|
||||
expect(invocation.getDescription()).toBe('npm start [background]');
|
||||
});
|
||||
|
||||
it('should create combined signal with timeout for foreground execution', async () => {
|
||||
const mockAbortSignal = new AbortController().signal;
|
||||
const invocation = shellTool.build({
|
||||
command: 'sleep 1',
|
||||
is_background: false,
|
||||
timeout: 5000,
|
||||
});
|
||||
|
||||
const promise = invocation.execute(mockAbortSignal);
|
||||
|
||||
resolveExecutionPromise({
|
||||
rawOutput: Buffer.from(''),
|
||||
output: '',
|
||||
exitCode: 0,
|
||||
signal: null,
|
||||
error: null,
|
||||
aborted: false,
|
||||
pid: 12345,
|
||||
executionMethod: 'child_process',
|
||||
});
|
||||
|
||||
await promise;
|
||||
|
||||
// Verify that ShellExecutionService was called with a combined signal
|
||||
expect(mockShellExecutionService).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.any(String),
|
||||
expect.any(Function),
|
||||
expect.any(AbortSignal),
|
||||
false,
|
||||
{},
|
||||
);
|
||||
|
||||
// The signal passed should be different from the original signal
|
||||
const calledSignal = mockShellExecutionService.mock.calls[0][3];
|
||||
expect(calledSignal).not.toBe(mockAbortSignal);
|
||||
});
|
||||
|
||||
it('should not create timeout signal for background execution', async () => {
|
||||
const mockAbortSignal = new AbortController().signal;
|
||||
const invocation = shellTool.build({
|
||||
command: 'npm start',
|
||||
is_background: true,
|
||||
timeout: 5000,
|
||||
});
|
||||
|
||||
const promise = invocation.execute(mockAbortSignal);
|
||||
|
||||
resolveExecutionPromise({
|
||||
rawOutput: Buffer.from(''),
|
||||
output: 'Background command started. PID: 12345',
|
||||
exitCode: 0,
|
||||
signal: null,
|
||||
error: null,
|
||||
aborted: false,
|
||||
pid: 12345,
|
||||
executionMethod: 'child_process',
|
||||
});
|
||||
|
||||
await promise;
|
||||
|
||||
// For background execution, the original signal should be used
|
||||
expect(mockShellExecutionService).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.any(String),
|
||||
expect.any(Function),
|
||||
mockAbortSignal,
|
||||
false,
|
||||
{},
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle timeout vs user cancellation correctly', async () => {
|
||||
const userAbortController = new AbortController();
|
||||
const invocation = shellTool.build({
|
||||
command: 'sleep 10',
|
||||
is_background: false,
|
||||
timeout: 5000,
|
||||
});
|
||||
|
||||
// Mock AbortSignal.timeout and AbortSignal.any
|
||||
const mockTimeoutSignal = {
|
||||
aborted: false,
|
||||
addEventListener: vi.fn(),
|
||||
removeEventListener: vi.fn(),
|
||||
} as unknown as AbortSignal;
|
||||
|
||||
const mockCombinedSignal = {
|
||||
aborted: true,
|
||||
addEventListener: vi.fn(),
|
||||
removeEventListener: vi.fn(),
|
||||
} as unknown as AbortSignal;
|
||||
|
||||
const originalAbortSignal = globalThis.AbortSignal;
|
||||
vi.stubGlobal('AbortSignal', {
|
||||
...originalAbortSignal,
|
||||
timeout: vi.fn().mockReturnValue(mockTimeoutSignal),
|
||||
any: vi.fn().mockReturnValue(mockCombinedSignal),
|
||||
});
|
||||
|
||||
const promise = invocation.execute(userAbortController.signal);
|
||||
|
||||
resolveExecutionPromise({
|
||||
rawOutput: Buffer.from('partial output'),
|
||||
output: 'partial output',
|
||||
exitCode: null,
|
||||
signal: null,
|
||||
error: null,
|
||||
aborted: true,
|
||||
pid: 12345,
|
||||
executionMethod: 'child_process',
|
||||
});
|
||||
|
||||
const result = await promise;
|
||||
|
||||
// Restore original AbortSignal
|
||||
vi.stubGlobal('AbortSignal', originalAbortSignal);
|
||||
|
||||
expect(result.llmContent).toContain('Command timed out after 5000ms');
|
||||
expect(result.llmContent).toContain(
|
||||
'Below is the output before it timed out',
|
||||
);
|
||||
});
|
||||
|
||||
it('should use default timeout behavior when timeout is not specified', async () => {
|
||||
const mockAbortSignal = new AbortController().signal;
|
||||
const invocation = shellTool.build({
|
||||
command: 'echo test',
|
||||
is_background: false,
|
||||
});
|
||||
|
||||
const promise = invocation.execute(mockAbortSignal);
|
||||
|
||||
resolveExecutionPromise({
|
||||
rawOutput: Buffer.from('test'),
|
||||
output: 'test',
|
||||
exitCode: 0,
|
||||
signal: null,
|
||||
error: null,
|
||||
aborted: false,
|
||||
pid: 12345,
|
||||
executionMethod: 'child_process',
|
||||
});
|
||||
|
||||
await promise;
|
||||
|
||||
// Should create a combined signal with the default timeout when no timeout is specified
|
||||
expect(mockShellExecutionService).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.any(String),
|
||||
expect.any(Function),
|
||||
expect.any(AbortSignal),
|
||||
false,
|
||||
{},
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@@ -34,7 +34,6 @@ import type {
|
||||
import { ShellExecutionService } from '../services/shellExecutionService.js';
|
||||
import { formatMemoryUsage } from '../utils/formatters.js';
|
||||
import type { AnsiOutput } from '../utils/terminalSerializer.js';
|
||||
import { isSubpath } from '../utils/paths.js';
|
||||
import {
|
||||
getCommandRoots,
|
||||
isCommandAllowed,
|
||||
@@ -43,12 +42,10 @@ import {
|
||||
} from '../utils/shell-utils.js';
|
||||
|
||||
export const OUTPUT_UPDATE_INTERVAL_MS = 1000;
|
||||
const DEFAULT_FOREGROUND_TIMEOUT_MS = 120000;
|
||||
|
||||
export interface ShellToolParams {
|
||||
command: string;
|
||||
is_background: boolean;
|
||||
timeout?: number;
|
||||
description?: string;
|
||||
directory?: string;
|
||||
}
|
||||
@@ -75,9 +72,6 @@ export class ShellToolInvocation extends BaseToolInvocation<
|
||||
// append background indicator
|
||||
if (this.params.is_background) {
|
||||
description += ` [background]`;
|
||||
} else if (this.params.timeout) {
|
||||
// append timeout for foreground commands
|
||||
description += ` [timeout: ${this.params.timeout}ms]`;
|
||||
}
|
||||
// append optional (description), replacing any line breaks with spaces
|
||||
if (this.params.description) {
|
||||
@@ -136,17 +130,6 @@ export class ShellToolInvocation extends BaseToolInvocation<
       };
     }

-    const effectiveTimeout = this.params.is_background
-      ? undefined
-      : (this.params.timeout ?? DEFAULT_FOREGROUND_TIMEOUT_MS);
-
-    // Create combined signal with timeout for foreground execution
-    let combinedSignal = signal;
-    if (effectiveTimeout) {
-      const timeoutSignal = AbortSignal.timeout(effectiveTimeout);
-      combinedSignal = AbortSignal.any([signal, timeoutSignal]);
-    }
-
     const isWindows = os.platform() === 'win32';
     const tempFileName = `shell_pgrep_${crypto
       .randomBytes(6)
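The block removed above merges the caller's abort signal with a deadline, so that either user cancellation or the timeout aborts the command. A minimal standalone sketch of the same pattern using the standard `AbortSignal.timeout` and `AbortSignal.any` APIs (available in Node 20+):

```typescript
// Combine user cancellation with a deadline: whichever fires first aborts the work.
function withTimeout(userSignal: AbortSignal, timeoutMs?: number): AbortSignal {
  if (!timeoutMs) {
    return userSignal; // background commands keep only the user's signal
  }
  const timeoutSignal = AbortSignal.timeout(timeoutMs);
  return AbortSignal.any([userSignal, timeoutSignal]);
}

// Usage sketch: telling a timeout apart from a user cancellation afterwards.
const userController = new AbortController();
const combined = withTimeout(userController.signal, 5_000);
combined.addEventListener('abort', () => {
  const wasTimeout = combined.aborted && !userController.signal.aborted;
  console.log(wasTimeout ? 'timed out' : 'cancelled by user');
});
```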
@@ -236,7 +219,7 @@ export class ShellToolInvocation extends BaseToolInvocation<
|
||||
lastUpdateTime = Date.now();
|
||||
}
|
||||
},
|
||||
combinedSignal,
|
||||
signal,
|
||||
this.config.getShouldUseNodePtyShell(),
|
||||
shellExecutionConfig ?? {},
|
||||
);
|
||||
@@ -287,28 +270,11 @@ export class ShellToolInvocation extends BaseToolInvocation<
|
||||
|
||||
let llmContent = '';
|
||||
if (result.aborted) {
|
||||
// Check if it was a timeout or user cancellation
|
||||
const wasTimeout =
|
||||
!this.params.is_background &&
|
||||
effectiveTimeout &&
|
||||
combinedSignal.aborted &&
|
||||
!signal.aborted;
|
||||
|
||||
if (wasTimeout) {
|
||||
llmContent = `Command timed out after ${effectiveTimeout}ms before it could complete.`;
|
||||
if (result.output.trim()) {
|
||||
llmContent += ` Below is the output before it timed out:\n${result.output}`;
|
||||
} else {
|
||||
llmContent += ' There was no output before it timed out.';
|
||||
}
|
||||
llmContent = 'Command was cancelled by user before it could complete.';
|
||||
if (result.output.trim()) {
|
||||
llmContent += ` Below is the output before it was cancelled:\n${result.output}`;
|
||||
} else {
|
||||
llmContent =
|
||||
'Command was cancelled by user before it could complete.';
|
||||
if (result.output.trim()) {
|
||||
llmContent += ` Below is the output before it was cancelled:\n${result.output}`;
|
||||
} else {
|
||||
llmContent += ' There was no output before it was cancelled.';
|
||||
}
|
||||
llmContent += ' There was no output before it was cancelled.';
|
||||
}
|
||||
} else {
|
||||
// Create a formatted error string for display, replacing the wrapper command
|
||||
@@ -339,16 +305,7 @@ export class ShellToolInvocation extends BaseToolInvocation<
|
||||
returnDisplayMessage = result.output;
|
||||
} else {
|
||||
if (result.aborted) {
|
||||
// Check if it was a timeout or user cancellation
|
||||
const wasTimeout =
|
||||
!this.params.is_background &&
|
||||
effectiveTimeout &&
|
||||
combinedSignal.aborted &&
|
||||
!signal.aborted;
|
||||
|
||||
returnDisplayMessage = wasTimeout
|
||||
? `Command timed out after ${effectiveTimeout}ms.`
|
||||
: 'Command cancelled by user.';
|
||||
returnDisplayMessage = 'Command cancelled by user.';
|
||||
} else if (result.signal) {
|
||||
returnDisplayMessage = `Command terminated by signal: ${result.signal}`;
|
||||
} else if (result.error) {
|
||||
@@ -449,59 +406,42 @@ Co-authored-by: ${gitCoAuthorSettings.name} <${gitCoAuthorSettings.email}>`;
|
||||
}
|
||||
|
||||
function getShellToolDescription(): string {
|
||||
const isWindows = os.platform() === 'win32';
|
||||
const executionWrapper = isWindows
|
||||
? 'cmd.exe /c <command>'
|
||||
: 'bash -c <command>';
|
||||
const processGroupNote = isWindows
|
||||
? ''
|
||||
: '\n - Command is executed as a subprocess that leads its own process group. Command process group can be terminated as `kill -- -PGID` or signaled as `kill -s SIGNAL -- -PGID`.';
|
||||
const toolDescription = `
|
||||
|
||||
return `Executes a given shell command (as \`${executionWrapper}\`) in a persistent shell session with optional timeout, ensuring proper handling and security measures.
|
||||
**Background vs Foreground Execution:**
|
||||
You should decide whether commands should run in background or foreground based on their nature:
|
||||
|
||||
**Use background execution (is_background: true) for:**
|
||||
- Long-running development servers: \`npm run start\`, \`npm run dev\`, \`yarn dev\`, \`bun run start\`
|
||||
- Build watchers: \`npm run watch\`, \`webpack --watch\`
|
||||
- Database servers: \`mongod\`, \`mysql\`, \`redis-server\`
|
||||
- Web servers: \`python -m http.server\`, \`php -S localhost:8000\`
|
||||
- Any command expected to run indefinitely until manually stopped
|
||||
|
||||
**Use foreground execution (is_background: false) for:**
|
||||
- One-time commands: \`ls\`, \`cat\`, \`grep\`
|
||||
- Build commands: \`npm run build\`, \`make\`
|
||||
- Installation commands: \`npm install\`, \`pip install\`
|
||||
- Git operations: \`git commit\`, \`git push\`
|
||||
- Test runs: \`npm test\`, \`pytest\`
|
||||
|
||||
The following information is returned:
|
||||
|
||||
IMPORTANT: This tool is for terminal operations like git, npm, docker, etc. DO NOT use it for file operations (reading, writing, editing, searching, finding files) - use the specialized tools for this instead.
|
||||
Command: Executed command.
|
||||
Directory: Directory where command was executed, or \`(root)\`.
|
||||
Stdout: Output on stdout stream. Can be \`(empty)\` or partial on error and for any unwaited background processes.
|
||||
Stderr: Output on stderr stream. Can be \`(empty)\` or partial on error and for any unwaited background processes.
|
||||
Error: Error or \`(none)\` if no error was reported for the subprocess.
|
||||
Exit Code: Exit code or \`(none)\` if terminated by signal.
|
||||
Signal: Signal number or \`(none)\` if no signal was received.
|
||||
Background PIDs: List of background processes started or \`(none)\`.
|
||||
Process Group PGID: Process group started or \`(none)\``;
|
||||
|
||||
**Usage notes**:
|
||||
- The command argument is required.
|
||||
- You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will timeout after 120000ms (2 minutes).
|
||||
- It is very helpful if you write a clear, concise description of what this command does in 5-10 words.
|
||||
|
||||
- Avoid using run_shell_command with the \`find\`, \`grep\`, \`cat\`, \`head\`, \`tail\`, \`sed\`, \`awk\`, or \`echo\` commands, unless explicitly instructed or when these commands are truly necessary for the task. Instead, always prefer using the dedicated tools for these commands:
|
||||
- File search: Use ${ToolNames.GLOB} (NOT find or ls)
|
||||
- Content search: Use ${ToolNames.GREP} (NOT grep or rg)
|
||||
- Read files: Use ${ToolNames.READ_FILE} (NOT cat/head/tail)
|
||||
- Edit files: Use ${ToolNames.EDIT} (NOT sed/awk)
|
||||
- Write files: Use ${ToolNames.WRITE_FILE} (NOT echo >/cat <<EOF)
|
||||
- Communication: Output text directly (NOT echo/printf)
|
||||
- When issuing multiple commands:
|
||||
- If the commands are independent and can run in parallel, make multiple run_shell_command tool calls in a single message. For example, if you need to run "git status" and "git diff", send a single message with two run_shell_command tool calls in parallel.
|
||||
- If the commands depend on each other and must run sequentially, use a single run_shell_command call with '&&' to chain them together (e.g., \`git add . && git commit -m "message" && git push\`). For instance, if one operation must complete before another starts (like mkdir before cp, Write before run_shell_command for git operations, or git add before git commit), run these operations sequentially instead.
|
||||
- Use ';' only when you need to run commands sequentially but don't care if earlier commands fail
|
||||
- DO NOT use newlines to separate commands (newlines are ok in quoted strings)
|
||||
- Try to maintain your current working directory throughout the session by using absolute paths and avoiding usage of \`cd\`. You may use \`cd\` if the User explicitly requests it.
|
||||
<good-example>
|
||||
pytest /foo/bar/tests
|
||||
</good-example>
|
||||
<bad-example>
|
||||
cd /foo/bar && pytest tests
|
||||
</bad-example>
|
||||
|
||||
**Background vs Foreground Execution:**
|
||||
- You should decide whether commands should run in background or foreground based on their nature:
|
||||
- Use background execution (is_background: true) for:
|
||||
- Long-running development servers: \`npm run start\`, \`npm run dev\`, \`yarn dev\`, \`bun run start\`
|
||||
- Build watchers: \`npm run watch\`, \`webpack --watch\`
|
||||
- Database servers: \`mongod\`, \`mysql\`, \`redis-server\`
|
||||
- Web servers: \`python -m http.server\`, \`php -S localhost:8000\`
|
||||
- Any command expected to run indefinitely until manually stopped
|
||||
${processGroupNote}
|
||||
- Use foreground execution (is_background: false) for:
|
||||
- One-time commands: \`ls\`, \`cat\`, \`grep\`
|
||||
- Build commands: \`npm run build\`, \`make\`
|
||||
- Installation commands: \`npm install\`, \`pip install\`
|
||||
- Git operations: \`git commit\`, \`git push\`
|
||||
- Test runs: \`npm test\`, \`pytest\`
|
||||
`;
|
||||
if (os.platform() === 'win32') {
|
||||
return `This tool executes a given shell command as \`cmd.exe /c <command>\`. Command can start background processes using \`start /b\`.${toolDescription}`;
|
||||
} else {
|
||||
return `This tool executes a given shell command as \`bash -c <command>\`. Command can start background processes using \`&\`. Command is executed as a subprocess that leads its own process group. Command process group can be terminated as \`kill -- -PGID\` or signaled as \`kill -s SIGNAL -- -PGID\`.${toolDescription}`;
|
||||
}
|
||||
}
|
||||
|
||||
function getCommandDescription(): string {
|
||||
@@ -545,10 +485,6 @@ export class ShellTool extends BaseDeclarativeTool<
|
||||
description:
|
||||
'Whether to run the command in background. Default is false. Set to true for long-running processes like development servers, watchers, or daemons that should continue running without blocking further commands.',
|
||||
},
|
||||
timeout: {
|
||||
type: 'number',
|
||||
description: 'Optional timeout in milliseconds (max 600000)',
|
||||
},
|
||||
description: {
|
||||
type: 'string',
|
||||
description:
|
||||
@@ -586,35 +522,10 @@ export class ShellTool extends BaseDeclarativeTool<
|
||||
if (getCommandRoots(params.command).length === 0) {
|
||||
return 'Could not identify command root to obtain permission from user.';
|
||||
}
|
||||
if (params.timeout !== undefined) {
|
||||
if (
|
||||
typeof params.timeout !== 'number' ||
|
||||
!Number.isInteger(params.timeout)
|
||||
) {
|
||||
return 'Timeout must be an integer number of milliseconds.';
|
||||
}
|
||||
if (params.timeout <= 0) {
|
||||
return 'Timeout must be a positive number.';
|
||||
}
|
||||
if (params.timeout > 600000) {
|
||||
return 'Timeout cannot exceed 600000ms (10 minutes).';
|
||||
}
|
||||
}
|
||||
if (params.directory) {
|
||||
if (!path.isAbsolute(params.directory)) {
|
||||
return 'Directory must be an absolute path.';
|
||||
}
|
||||
|
||||
const userSkillsDir = this.config.storage.getUserSkillsDir();
|
||||
const resolvedDirectoryPath = path.resolve(params.directory);
|
||||
const isWithinUserSkills = isSubpath(
|
||||
userSkillsDir,
|
||||
resolvedDirectoryPath,
|
||||
);
|
||||
if (isWithinUserSkills) {
|
||||
return `Explicitly running shell commands from within the user skills directory is not allowed. Please use absolute paths for command parameter instead.`;
|
||||
}
|
||||
|
||||
const workspaceDirs = this.config.getWorkspaceContext().getDirectories();
|
||||
const isWithinWorkspace = workspaceDirs.some((wsDir) =>
|
||||
params.directory!.startsWith(wsDir),
|
||||
|
||||
@@ -324,9 +324,7 @@ describe('SkillTool', () => {
|
||||
'Review code for quality and best practices.',
|
||||
);
|
||||
|
||||
expect(result.returnDisplay).toBe(
|
||||
'Specialized skill for reviewing code quality',
|
||||
);
|
||||
expect(result.returnDisplay).toBe('Launching skill: code-review');
|
||||
});
|
||||
|
||||
it('should include allowedTools in result when present', async () => {
|
||||
@@ -351,7 +349,7 @@ describe('SkillTool', () => {
|
||||
// Base description is omitted from llmContent; ensure body is present.
|
||||
expect(llmText).toContain('Help write comprehensive tests.');
|
||||
|
||||
expect(result.returnDisplay).toBe('Skill for writing and running tests');
|
||||
expect(result.returnDisplay).toBe('Launching skill: testing');
|
||||
});
|
||||
|
||||
it('should handle skill not found error', async () => {
|
||||
@@ -418,7 +416,7 @@ describe('SkillTool', () => {
|
||||
).createInvocation(params);
|
||||
const description = invocation.getDescription();
|
||||
|
||||
expect(description).toBe('Use skill: "code-review"');
|
||||
expect(description).toBe('Launching skill: "code-review"');
|
||||
});
|
||||
|
||||
it('should handle skill without additional files', async () => {
|
||||
@@ -438,9 +436,7 @@ describe('SkillTool', () => {
|
||||
const llmText = partToString(result.llmContent);
|
||||
expect(llmText).not.toContain('## Additional Files');
|
||||
|
||||
expect(result.returnDisplay).toBe(
|
||||
'Specialized skill for reviewing code quality',
|
||||
);
|
||||
expect(result.returnDisplay).toBe('Launching skill: code-review');
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@@ -49,7 +49,7 @@ export class SkillTool extends BaseDeclarativeTool<SkillParams, ToolResult> {
|
||||
'Execute a skill within the main conversation. Loading available skills...', // Initial description
|
||||
Kind.Read,
|
||||
initialSchema,
|
||||
false, // isOutputMarkdown
|
||||
true, // isOutputMarkdown
|
||||
false, // canUpdateOutput
|
||||
);
|
||||
|
||||
@@ -128,10 +128,6 @@ Important:
|
||||
- Only use skills listed in <available_skills> below
|
||||
- Do not invoke a skill that is already running
|
||||
- Do not use this tool for built-in CLI commands (like /help, /clear, etc.)
|
||||
- When executing scripts or loading referenced files, ALWAYS resolve absolute paths from skill's base directory. Examples:
|
||||
- \`bash scripts/init.sh\` -> \`bash /path/to/skill/scripts/init.sh\`
|
||||
- \`python scripts/helper.py\` -> \`python /path/to/skill/scripts/helper.py\`
|
||||
- \`reference.md\` -> \`/path/to/skill/reference.md\`
|
||||
</skills_instructions>
|
||||
|
||||
<available_skills>
|
||||
@@ -187,7 +183,7 @@ class SkillToolInvocation extends BaseToolInvocation<SkillParams, ToolResult> {
|
||||
}
|
||||
|
||||
getDescription(): string {
|
||||
return `Use skill: "${this.params.skill}"`;
|
||||
return `Launching skill: "${this.params.skill}"`;
|
||||
}
|
||||
|
||||
override async shouldConfirmExecute(): Promise<false> {
|
||||
@@ -242,16 +238,16 @@ class SkillToolInvocation extends BaseToolInvocation<SkillParams, ToolResult> {
|
||||
const baseDir = path.dirname(skill.filePath);
|
||||
|
||||
// Build markdown content for LLM (show base dir, then body)
|
||||
const llmContent = `Base directory for this skill: ${baseDir}\nImportant: ALWAYS resolve absolute paths from this base directory when working with skills.\n\n${skill.body}\n`;
|
||||
const llmContent = `Base directory for this skill: ${baseDir}\n\n${skill.body}\n`;
|
||||
|
||||
return {
|
||||
llmContent: [{ text: llmContent }],
|
||||
returnDisplay: skill.description,
|
||||
returnDisplay: `Launching skill: ${skill.name}`,
|
||||
};
|
||||
} catch (error) {
|
||||
const errorMessage =
|
||||
error instanceof Error ? error.message : String(error);
|
||||
console.error(`[SkillsTool] Error using skill: ${errorMessage}`);
|
||||
console.error(`[SkillsTool] Error launching skill: ${errorMessage}`);
|
||||
|
||||
// Log failed skill launch
|
||||
logSkillLaunch(
|
||||
|
||||
@@ -17,7 +17,6 @@ import {
|
||||
import { WebViewProvider } from './webview/WebViewProvider.js';
|
||||
import { registerNewCommands } from './commands/index.js';
|
||||
import { ReadonlyFileSystemProvider } from './services/readonlyFileSystemProvider.js';
|
||||
import { isWindows } from './utils/platform.js';
|
||||
|
||||
const CLI_IDE_COMPANION_IDENTIFIER = 'qwenlm.qwen-code-vscode-ide-companion';
|
||||
const INFO_MESSAGE_SHOWN_KEY = 'qwenCodeInfoMessageShown';
|
||||
@@ -313,38 +312,13 @@ export async function activate(context: vscode.ExtensionContext) {
|
||||
'qwen-cli',
|
||||
'cli.js',
|
||||
).fsPath;
|
||||
const execPath = process.execPath;
|
||||
const lowerExecPath = execPath.toLowerCase();
|
||||
const needsElectronRunAsNode =
|
||||
lowerExecPath.includes('code') ||
|
||||
lowerExecPath.includes('electron');
|
||||
|
||||
let qwenCmd: string;
|
||||
const terminalOptions: vscode.TerminalOptions = {
|
||||
const quote = (s: string) => `"${s.replaceAll('"', '\\"')}"`;
|
||||
const qwenCmd = `${quote(process.execPath)} ${quote(cliEntry)}`;
|
||||
const terminal = vscode.window.createTerminal({
|
||||
name: `Qwen Code (${selectedFolder.name})`,
|
||||
cwd: selectedFolder.uri.fsPath,
|
||||
location,
|
||||
};
|
||||
|
||||
if (isWindows) {
|
||||
// Use system Node via cmd.exe; avoid PowerShell parsing issues
|
||||
const quoteCmd = (s: string) => `"${s.replace(/"/g, '""')}"`;
|
||||
const cliQuoted = quoteCmd(cliEntry);
|
||||
// TODO: @yiliang114, temporarily run through node, and later hope to decouple from the local node
|
||||
qwenCmd = `node ${cliQuoted}`;
|
||||
terminalOptions.shellPath = process.env.ComSpec;
|
||||
} else {
|
||||
const quotePosix = (s: string) => `"${s.replace(/"/g, '\\"')}"`;
|
||||
const baseCmd = `${quotePosix(execPath)} ${quotePosix(cliEntry)}`;
|
||||
if (needsElectronRunAsNode) {
|
||||
// macOS Electron helper needs ELECTRON_RUN_AS_NODE=1;
|
||||
qwenCmd = `ELECTRON_RUN_AS_NODE=1 ${baseCmd}`;
|
||||
} else {
|
||||
qwenCmd = baseCmd;
|
||||
}
|
||||
}
|
||||
|
||||
const terminal = vscode.window.createTerminal(terminalOptions);
|
||||
});
|
||||
terminal.show();
|
||||
terminal.sendText(qwenCmd);
|
||||
}
|
||||
|
||||
@@ -6,8 +6,7 @@
|
||||
|
||||
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
|
||||
import * as vscode from 'vscode';
|
||||
import { OpenFilesManager } from './open-files-manager.js';
|
||||
import { MAX_FILES } from './services/open-files-manager/constants.js';
|
||||
import { OpenFilesManager, MAX_FILES } from './open-files-manager.js';
|
||||
|
||||
vi.mock('vscode', () => ({
|
||||
EventEmitter: vi.fn(() => {
|
||||
|
||||
@@ -9,23 +9,9 @@ import type {
|
||||
File,
|
||||
IdeContext,
|
||||
} from '@qwen-code/qwen-code-core/src/ide/types.js';
|
||||
import {
|
||||
isFileUri,
|
||||
isNotebookFileUri,
|
||||
isNotebookCellUri,
|
||||
removeFile,
|
||||
renameFile,
|
||||
getNotebookUriFromCellUri,
|
||||
} from './services/open-files-manager/utils.js';
|
||||
import {
|
||||
addOrMoveToFront,
|
||||
updateActiveContext,
|
||||
} from './services/open-files-manager/text-handler.js';
|
||||
import {
|
||||
addOrMoveToFrontNotebook,
|
||||
updateNotebookActiveContext,
|
||||
updateNotebookCellSelection,
|
||||
} from './services/open-files-manager/notebook-handler.js';
|
||||
|
||||
export const MAX_FILES = 10;
|
||||
const MAX_SELECTED_TEXT_LENGTH = 16384; // 16 KiB limit
|
||||
|
||||
/**
|
||||
* Keeps track of the workspace state, including open files, cursor position, and selected text.
|
||||
@@ -39,102 +25,33 @@ export class OpenFilesManager {
|
||||
constructor(private readonly context: vscode.ExtensionContext) {
|
||||
const editorWatcher = vscode.window.onDidChangeActiveTextEditor(
|
||||
(editor) => {
|
||||
if (editor && isFileUri(editor.document.uri)) {
|
||||
addOrMoveToFront(this.openFiles, editor);
|
||||
if (editor && this.isFileUri(editor.document.uri)) {
|
||||
this.addOrMoveToFront(editor);
|
||||
this.fireWithDebounce();
|
||||
} else if (editor && isNotebookCellUri(editor.document.uri)) {
|
||||
// Handle when a notebook cell becomes active (which indicates the notebook is active)
|
||||
const notebookUri = getNotebookUriFromCellUri(editor.document.uri);
|
||||
if (notebookUri && isNotebookFileUri(notebookUri)) {
|
||||
// Find the notebook editor for this cell
|
||||
const notebookEditor = vscode.window.visibleNotebookEditors.find(
|
||||
(nbEditor) =>
|
||||
nbEditor.notebook.uri.toString() === notebookUri.toString(),
|
||||
);
|
||||
if (notebookEditor) {
|
||||
addOrMoveToFrontNotebook(this.openFiles, notebookEditor);
|
||||
this.fireWithDebounce();
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
);
|
||||
|
||||
// Watch for when notebook editors gain focus by monitoring focus changes
|
||||
// Since VS Code doesn't have a direct onDidChangeActiveNotebookEditor event,
|
||||
// we monitor when visible notebook editors change and assume the last one shown is active
|
||||
let notebookFocusWatcher: vscode.Disposable | undefined;
|
||||
if (vscode.window.onDidChangeVisibleNotebookEditors) {
|
||||
notebookFocusWatcher = vscode.window.onDidChangeVisibleNotebookEditors(
|
||||
() => {
|
||||
// When visible notebook editors change, the currently focused one is likely the active one
|
||||
const activeNotebookEditor = vscode.window.activeNotebookEditor;
|
||||
if (
|
||||
activeNotebookEditor &&
|
||||
isNotebookFileUri(activeNotebookEditor.notebook.uri)
|
||||
) {
|
||||
addOrMoveToFrontNotebook(this.openFiles, activeNotebookEditor);
|
||||
this.fireWithDebounce();
|
||||
}
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
const selectionWatcher = vscode.window.onDidChangeTextEditorSelection(
|
||||
(event) => {
|
||||
if (isFileUri(event.textEditor.document.uri)) {
|
||||
updateActiveContext(this.openFiles, event.textEditor);
|
||||
this.fireWithDebounce();
|
||||
} else if (isNotebookCellUri(event.textEditor.document.uri)) {
|
||||
// Handle text selections within notebook cells
|
||||
updateNotebookCellSelection(
|
||||
this.openFiles,
|
||||
event.textEditor,
|
||||
event.selections,
|
||||
);
|
||||
if (this.isFileUri(event.textEditor.document.uri)) {
|
||||
this.updateActiveContext(event.textEditor);
|
||||
this.fireWithDebounce();
|
||||
}
|
||||
},
|
||||
);
|
||||
|
||||
// Add notebook cell selection watcher for .ipynb files if the API is available
|
||||
let notebookCellSelectionWatcher: vscode.Disposable | undefined;
|
||||
if (vscode.window.onDidChangeNotebookEditorSelection) {
|
||||
notebookCellSelectionWatcher =
|
||||
vscode.window.onDidChangeNotebookEditorSelection((event) => {
|
||||
if (isNotebookFileUri(event.notebookEditor.notebook.uri)) {
|
||||
// Ensure the notebook is added to the active list if selected
|
||||
addOrMoveToFrontNotebook(this.openFiles, event.notebookEditor);
|
||||
updateNotebookActiveContext(this.openFiles, event.notebookEditor);
|
||||
this.fireWithDebounce();
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
const closeWatcher = vscode.workspace.onDidCloseTextDocument((document) => {
|
||||
if (isFileUri(document.uri)) {
|
||||
removeFile(this.openFiles, document.uri);
|
||||
if (this.isFileUri(document.uri)) {
|
||||
this.remove(document.uri);
|
||||
this.fireWithDebounce();
|
||||
}
|
||||
});
|
||||
|
||||
// Add notebook close watcher if the API is available
|
||||
let notebookCloseWatcher: vscode.Disposable | undefined;
|
||||
if (vscode.workspace.onDidCloseNotebookDocument) {
|
||||
notebookCloseWatcher = vscode.workspace.onDidCloseNotebookDocument(
|
||||
(document) => {
|
||||
if (isNotebookFileUri(document.uri)) {
|
||||
removeFile(this.openFiles, document.uri);
|
||||
this.fireWithDebounce();
|
||||
}
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
const deleteWatcher = vscode.workspace.onDidDeleteFiles((event) => {
|
||||
for (const uri of event.files) {
|
||||
if (isFileUri(uri) || isNotebookFileUri(uri)) {
|
||||
removeFile(this.openFiles, uri);
|
||||
if (this.isFileUri(uri)) {
|
||||
this.remove(uri);
|
||||
}
|
||||
}
|
||||
this.fireWithDebounce();
|
||||
@@ -142,12 +59,12 @@ export class OpenFilesManager {
|
||||
|
||||
const renameWatcher = vscode.workspace.onDidRenameFiles((event) => {
|
||||
for (const { oldUri, newUri } of event.files) {
|
||||
if (isFileUri(oldUri) || isNotebookFileUri(oldUri)) {
|
||||
if (isFileUri(newUri) || isNotebookFileUri(newUri)) {
|
||||
renameFile(this.openFiles, oldUri, newUri);
|
||||
if (this.isFileUri(oldUri)) {
|
||||
if (this.isFileUri(newUri)) {
|
||||
this.rename(oldUri, newUri);
|
||||
} else {
|
||||
// The file was renamed to a non-file URI, so we should remove it.
|
||||
removeFile(this.openFiles, oldUri);
|
||||
this.remove(oldUri);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -162,37 +79,87 @@ export class OpenFilesManager {
|
||||
renameWatcher,
|
||||
);
|
||||
|
||||
// Conditionally add notebook-specific watchers if they were created
|
||||
if (notebookCellSelectionWatcher) {
|
||||
context.subscriptions.push(notebookCellSelectionWatcher);
|
||||
}
|
||||
|
||||
if (notebookCloseWatcher) {
|
||||
context.subscriptions.push(notebookCloseWatcher);
|
||||
}
|
||||
|
||||
if (notebookFocusWatcher) {
|
||||
context.subscriptions.push(notebookFocusWatcher);
|
||||
}
|
||||
|
||||
// Just add current active file on start-up.
|
||||
if (
|
||||
vscode.window.activeTextEditor &&
|
||||
isFileUri(vscode.window.activeTextEditor.document.uri)
|
||||
this.isFileUri(vscode.window.activeTextEditor.document.uri)
|
||||
) {
|
||||
addOrMoveToFront(this.openFiles, vscode.window.activeTextEditor);
|
||||
this.addOrMoveToFront(vscode.window.activeTextEditor);
|
||||
}
|
||||
}
|
||||
|
||||
private isFileUri(uri: vscode.Uri): boolean {
|
||||
return uri.scheme === 'file';
|
||||
}
|
||||
|
||||
private addOrMoveToFront(editor: vscode.TextEditor) {
|
||||
// Deactivate previous active file
|
||||
const currentActive = this.openFiles.find((f) => f.isActive);
|
||||
if (currentActive) {
|
||||
currentActive.isActive = false;
|
||||
currentActive.cursor = undefined;
|
||||
currentActive.selectedText = undefined;
|
||||
}
|
||||
|
||||
// Also add current active notebook if applicable and the API is available
|
||||
if (
|
||||
vscode.window.activeNotebookEditor &&
|
||||
isNotebookFileUri(vscode.window.activeNotebookEditor.notebook.uri)
|
||||
) {
|
||||
addOrMoveToFrontNotebook(
|
||||
this.openFiles,
|
||||
vscode.window.activeNotebookEditor,
|
||||
);
|
||||
// Remove if it exists
|
||||
const index = this.openFiles.findIndex(
|
||||
(f) => f.path === editor.document.uri.fsPath,
|
||||
);
|
||||
if (index !== -1) {
|
||||
this.openFiles.splice(index, 1);
|
||||
}
|
||||
|
||||
// Add to the front as active
|
||||
this.openFiles.unshift({
|
||||
path: editor.document.uri.fsPath,
|
||||
timestamp: Date.now(),
|
||||
isActive: true,
|
||||
});
|
||||
|
||||
// Enforce max length
|
||||
if (this.openFiles.length > MAX_FILES) {
|
||||
this.openFiles.pop();
|
||||
}
|
||||
|
||||
this.updateActiveContext(editor);
|
||||
}
|
||||
|
||||
private remove(uri: vscode.Uri) {
|
||||
const index = this.openFiles.findIndex((f) => f.path === uri.fsPath);
|
||||
if (index !== -1) {
|
||||
this.openFiles.splice(index, 1);
|
||||
}
|
||||
}
|
||||
|
||||
private rename(oldUri: vscode.Uri, newUri: vscode.Uri) {
|
||||
const index = this.openFiles.findIndex((f) => f.path === oldUri.fsPath);
|
||||
if (index !== -1) {
|
||||
this.openFiles[index].path = newUri.fsPath;
|
||||
}
|
||||
}
|
||||
|
||||
private updateActiveContext(editor: vscode.TextEditor) {
|
||||
const file = this.openFiles.find(
|
||||
(f) => f.path === editor.document.uri.fsPath,
|
||||
);
|
||||
if (!file || !file.isActive) {
|
||||
return;
|
||||
}
|
||||
|
||||
file.cursor = editor.selection.active
|
||||
? {
|
||||
line: editor.selection.active.line + 1,
|
||||
character: editor.selection.active.character,
|
||||
}
|
||||
: undefined;
|
||||
|
||||
let selectedText: string | undefined =
|
||||
editor.document.getText(editor.selection) || undefined;
|
||||
if (selectedText && selectedText.length > MAX_SELECTED_TEXT_LENGTH) {
|
||||
selectedText =
|
||||
selectedText.substring(0, MAX_SELECTED_TEXT_LENGTH) + '... [TRUNCATED]';
|
||||
}
|
||||
file.selectedText = selectedText;
|
||||
}
|
||||
|
||||
private fireWithDebounce() {
|
||||
|
||||
@@ -26,7 +26,6 @@ import type {
|
||||
} from '../types/connectionTypes.js';
|
||||
import { AcpFileHandler } from '../services/acpFileHandler.js';
|
||||
import type { ChildProcess } from 'child_process';
|
||||
import { isWindows } from '../utils/platform.js';
|
||||
|
||||
/**
|
||||
* ACP Message Handler Class
|
||||
@@ -48,7 +47,7 @@ export class AcpMessageHandler {
|
||||
sendResponseMessage(child: ChildProcess | null, response: AcpResponse): void {
|
||||
if (child?.stdin) {
|
||||
const jsonString = JSON.stringify(response);
|
||||
const lineEnding = isWindows ? '\r\n' : '\n';
|
||||
const lineEnding = process.platform === 'win32' ? '\r\n' : '\n';
|
||||
child.stdin.write(jsonString + lineEnding);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -19,7 +19,6 @@ import type { ApprovalModeValue } from '../types/approvalModeValueTypes.js';
|
||||
import { AGENT_METHODS } from '../constants/acpSchema.js';
|
||||
import type { PendingRequest } from '../types/connectionTypes.js';
|
||||
import type { ChildProcess } from 'child_process';
|
||||
import { isWindows } from '../utils/platform.js';
|
||||
|
||||
/**
|
||||
* ACP Session Manager Class
|
||||
@@ -103,7 +102,7 @@ export class AcpSessionManager {
|
||||
): void {
|
||||
if (child?.stdin) {
|
||||
const jsonString = JSON.stringify(message);
|
||||
const lineEnding = isWindows ? '\r\n' : '\n';
|
||||
const lineEnding = process.platform === 'win32' ? '\r\n' : '\n';
|
||||
child.stdin.write(jsonString + lineEnding);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,8 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
export const MAX_FILES = 10;
|
||||
export const MAX_SELECTED_TEXT_LENGTH = 16384; // 16 KiB limit
|
||||
@@ -1,119 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import * as vscode from 'vscode';
|
||||
import type { File } from '@qwen-code/qwen-code-core/src/ide/types.js';
|
||||
import { MAX_FILES, MAX_SELECTED_TEXT_LENGTH } from './constants.js';
|
||||
import {
|
||||
deactivateCurrentActiveFile,
|
||||
enforceMaxFiles,
|
||||
truncateSelectedText,
|
||||
getNotebookUriFromCellUri,
|
||||
} from './utils.js';
|
||||
|
||||
export function addOrMoveToFrontNotebook(
|
||||
openFiles: File[],
|
||||
notebookEditor: vscode.NotebookEditor,
|
||||
) {
|
||||
// Deactivate previous active file
|
||||
deactivateCurrentActiveFile(openFiles);
|
||||
|
||||
// Remove if it exists
|
||||
const index = openFiles.findIndex(
|
||||
(f) => f.path === notebookEditor.notebook.uri.fsPath,
|
||||
);
|
||||
if (index !== -1) {
|
||||
openFiles.splice(index, 1);
|
||||
}
|
||||
|
||||
// Add to the front as active
|
||||
openFiles.unshift({
|
||||
path: notebookEditor.notebook.uri.fsPath,
|
||||
timestamp: Date.now(),
|
||||
isActive: true,
|
||||
});
|
||||
|
||||
// Enforce max length
|
||||
enforceMaxFiles(openFiles, MAX_FILES);
|
||||
|
||||
updateNotebookActiveContext(openFiles, notebookEditor);
|
||||
}
|
||||
|
||||
export function updateNotebookActiveContext(
|
||||
openFiles: File[],
|
||||
notebookEditor: vscode.NotebookEditor,
|
||||
) {
|
||||
const file = openFiles.find(
|
||||
(f) => f.path === notebookEditor.notebook.uri.fsPath,
|
||||
);
|
||||
if (!file || !file.isActive) {
|
||||
return;
|
||||
}
|
||||
|
||||
// For notebook editors, selections may span multiple cells
|
||||
// We'll gather selected text from all selected cells
|
||||
const selections = notebookEditor.selections;
|
||||
let combinedSelectedText = '';
|
||||
|
||||
for (const selection of selections) {
|
||||
// Process each selected cell range
|
||||
for (let i = selection.start; i < selection.end; i++) {
|
||||
const cell = notebookEditor.notebook.cellAt(i);
|
||||
if (cell && cell.kind === vscode.NotebookCellKind.Code) {
|
||||
// For now, we'll get the full cell content if it's in a selection
|
||||
// TODO: Implement per-cell cursor position and finer-grained selection if needed
|
||||
combinedSelectedText += cell.document.getText() + '\n';
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (combinedSelectedText) {
|
||||
combinedSelectedText = combinedSelectedText.trim();
|
||||
file.selectedText = truncateSelectedText(
|
||||
combinedSelectedText,
|
||||
MAX_SELECTED_TEXT_LENGTH,
|
||||
);
|
||||
} else {
|
||||
file.selectedText = undefined;
|
||||
}
|
||||
}
|
||||
|
||||
export function updateNotebookCellSelection(
|
||||
openFiles: File[],
|
||||
cellEditor: vscode.TextEditor,
|
||||
selections: readonly vscode.Selection[],
|
||||
) {
|
||||
// Find the parent notebook by traversing the URI
|
||||
const notebookUri = getNotebookUriFromCellUri(cellEditor.document.uri);
|
||||
if (!notebookUri) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Find the corresponding file entry for this notebook
|
||||
const file = openFiles.find((f) => f.path === notebookUri.fsPath);
|
||||
if (!file || !file.isActive) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Extract the selected text from the cell editor
|
||||
let selectedText = '';
|
||||
for (const selection of selections) {
|
||||
const text = cellEditor.document.getText(selection);
|
||||
if (text) {
|
||||
selectedText += text + '\n';
|
||||
}
|
||||
}
|
||||
|
||||
if (selectedText) {
|
||||
selectedText = selectedText.trim();
|
||||
file.selectedText = truncateSelectedText(
|
||||
selectedText,
|
||||
MAX_SELECTED_TEXT_LENGTH,
|
||||
);
|
||||
} else {
|
||||
file.selectedText = undefined;
|
||||
}
|
||||
}
|
||||
@@ -1,61 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import type * as vscode from 'vscode';
|
||||
import type { File } from '@qwen-code/qwen-code-core/src/ide/types.js';
|
||||
import { MAX_FILES, MAX_SELECTED_TEXT_LENGTH } from './constants.js';
|
||||
import {
|
||||
deactivateCurrentActiveFile,
|
||||
enforceMaxFiles,
|
||||
truncateSelectedText,
|
||||
} from './utils.js';
|
||||
|
||||
export function addOrMoveToFront(openFiles: File[], editor: vscode.TextEditor) {
|
||||
// Deactivate previous active file
|
||||
deactivateCurrentActiveFile(openFiles);
|
||||
|
||||
// Remove if it exists
|
||||
const index = openFiles.findIndex(
|
||||
(f) => f.path === editor.document.uri.fsPath,
|
||||
);
|
||||
if (index !== -1) {
|
||||
openFiles.splice(index, 1);
|
||||
}
|
||||
|
||||
// Add to the front as active
|
||||
openFiles.unshift({
|
||||
path: editor.document.uri.fsPath,
|
||||
timestamp: Date.now(),
|
||||
isActive: true,
|
||||
});
|
||||
|
||||
// Enforce max length
|
||||
enforceMaxFiles(openFiles, MAX_FILES);
|
||||
|
||||
updateActiveContext(openFiles, editor);
|
||||
}
|
||||
|
||||
export function updateActiveContext(
|
||||
openFiles: File[],
|
||||
editor: vscode.TextEditor,
|
||||
) {
|
||||
const file = openFiles.find((f) => f.path === editor.document.uri.fsPath);
|
||||
if (!file || !file.isActive) {
|
||||
return;
|
||||
}
|
||||
|
||||
file.cursor = editor.selection.active
|
||||
? {
|
||||
line: editor.selection.active.line + 1,
|
||||
character: editor.selection.active.character,
|
||||
}
|
||||
: undefined;
|
||||
|
||||
let selectedText: string | undefined =
|
||||
editor.document.getText(editor.selection) || undefined;
|
||||
selectedText = truncateSelectedText(selectedText, MAX_SELECTED_TEXT_LENGTH);
|
||||
file.selectedText = selectedText;
|
||||
}
|
||||
@@ -1,101 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import * as vscode from 'vscode';
|
||||
import type { File } from '@qwen-code/qwen-code-core/src/ide/types.js';
|
||||
|
||||
export function isFileUri(uri: vscode.Uri): boolean {
|
||||
return uri.scheme === 'file';
|
||||
}
|
||||
|
||||
export function isNotebookFileUri(uri: vscode.Uri): boolean {
|
||||
return uri.scheme === 'file' && uri.path.toLowerCase().endsWith('.ipynb');
|
||||
}
|
||||
|
||||
export function isNotebookCellUri(uri: vscode.Uri): boolean {
|
||||
// Notebook cell URIs have the scheme 'vscode-notebook-cell'
|
||||
return uri.scheme === 'vscode-notebook-cell';
|
||||
}
|
||||
|
||||
export function removeFile(openFiles: File[], uri: vscode.Uri): void {
|
||||
const index = openFiles.findIndex((f) => f.path === uri.fsPath);
|
||||
if (index !== -1) {
|
||||
openFiles.splice(index, 1);
|
||||
}
|
||||
}
|
||||
|
||||
export function renameFile(
|
||||
openFiles: File[],
|
||||
oldUri: vscode.Uri,
|
||||
newUri: vscode.Uri,
|
||||
): void {
|
||||
const index = openFiles.findIndex((f) => f.path === oldUri.fsPath);
|
||||
if (index !== -1) {
|
||||
openFiles[index].path = newUri.fsPath;
|
||||
}
|
||||
}
|
||||
|
||||
export function deactivateCurrentActiveFile(openFiles: File[]): void {
|
||||
const currentActive = openFiles.find((f) => f.isActive);
|
||||
if (currentActive) {
|
||||
currentActive.isActive = false;
|
||||
currentActive.cursor = undefined;
|
||||
currentActive.selectedText = undefined;
|
||||
}
|
||||
}
|
||||
|
||||
export function enforceMaxFiles(openFiles: File[], maxFiles: number): void {
|
||||
if (openFiles.length > maxFiles) {
|
||||
openFiles.pop();
|
||||
}
|
||||
}
|
||||
|
||||
export function truncateSelectedText(
|
||||
selectedText: string | undefined,
|
||||
maxLength: number,
|
||||
): string | undefined {
|
||||
if (!selectedText) {
|
||||
return undefined;
|
||||
}
|
||||
if (selectedText.length > maxLength) {
|
||||
return selectedText.substring(0, maxLength) + '... [TRUNCATED]';
|
||||
}
|
||||
return selectedText;
|
||||
}
|
||||
|
||||
export function getNotebookUriFromCellUri(
|
||||
cellUri: vscode.Uri,
|
||||
): vscode.Uri | null {
|
||||
// Most efficient approach: Check if the currently active notebook editor contains this cell
|
||||
const activeNotebookEditor = vscode.window.activeNotebookEditor;
|
||||
if (
|
||||
activeNotebookEditor &&
|
||||
isNotebookFileUri(activeNotebookEditor.notebook.uri)
|
||||
) {
|
||||
for (let i = 0; i < activeNotebookEditor.notebook.cellCount; i++) {
|
||||
const cell = activeNotebookEditor.notebook.cellAt(i);
|
||||
if (cell.document.uri.toString() === cellUri.toString()) {
|
||||
return activeNotebookEditor.notebook.uri;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If not in the active editor, check all visible notebook editors
|
||||
for (const editor of vscode.window.visibleNotebookEditors) {
|
||||
if (
|
||||
editor !== activeNotebookEditor &&
|
||||
isNotebookFileUri(editor.notebook.uri)
|
||||
) {
|
||||
for (let i = 0; i < editor.notebook.cellCount; i++) {
|
||||
const cell = editor.notebook.cellAt(i);
|
||||
if (cell.document.uri.toString() === cellUri.toString()) {
|
||||
return editor.notebook.uri;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return null;
|
||||
}
|
||||
@@ -1,8 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
/** Whether the current platform is Windows */
|
||||
export const isWindows = process.platform === 'win32';
|
||||
scripts/build_native.js (new file, 323 lines)
@@ -0,0 +1,323 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import fs from 'node:fs';
|
||||
import path from 'node:path';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
import { spawnSync } from 'node:child_process';
|
||||
|
||||
const __filename = fileURLToPath(import.meta.url);
|
||||
const __dirname = path.dirname(__filename);
|
||||
const rootDir = path.resolve(__dirname, '..');
|
||||
|
||||
const distRoot = path.join(rootDir, 'dist', 'native');
|
||||
const entryPoint = path.join(rootDir, 'packages', 'cli', 'index.ts');
|
||||
const localesDir = path.join(
|
||||
rootDir,
|
||||
'packages',
|
||||
'cli',
|
||||
'src',
|
||||
'i18n',
|
||||
'locales',
|
||||
);
|
||||
const vendorDir = path.join(rootDir, 'packages', 'core', 'vendor');
|
||||
|
||||
const rootPackageJson = JSON.parse(
|
||||
fs.readFileSync(path.join(rootDir, 'package.json'), 'utf-8'),
|
||||
);
|
||||
const cliName = Object.keys(rootPackageJson.bin || {})[0] || 'qwen';
|
||||
const version = rootPackageJson.version;
|
||||
|
||||
const TARGETS = [
|
||||
{
|
||||
id: 'darwin-arm64',
|
||||
os: 'darwin',
|
||||
arch: 'arm64',
|
||||
bunTarget: 'bun-darwin-arm64',
|
||||
},
|
||||
{
|
||||
id: 'darwin-x64',
|
||||
os: 'darwin',
|
||||
arch: 'x64',
|
||||
bunTarget: 'bun-darwin-x64',
|
||||
},
|
||||
{
|
||||
id: 'linux-arm64',
|
||||
os: 'linux',
|
||||
arch: 'arm64',
|
||||
bunTarget: 'bun-linux-arm64',
|
||||
},
|
||||
{
|
||||
id: 'linux-x64',
|
||||
os: 'linux',
|
||||
arch: 'x64',
|
||||
bunTarget: 'bun-linux-x64',
|
||||
},
|
||||
{
|
||||
id: 'linux-arm64-musl',
|
||||
os: 'linux',
|
||||
arch: 'arm64',
|
||||
libc: 'musl',
|
||||
bunTarget: 'bun-linux-arm64-musl',
|
||||
},
|
||||
{
|
||||
id: 'linux-x64-musl',
|
||||
os: 'linux',
|
||||
arch: 'x64',
|
||||
libc: 'musl',
|
||||
bunTarget: 'bun-linux-x64-musl',
|
||||
},
|
||||
{
|
||||
id: 'windows-x64',
|
||||
os: 'windows',
|
||||
arch: 'x64',
|
||||
bunTarget: 'bun-windows-x64',
|
||||
},
|
||||
];
|
||||
|
||||
function getHostTargetId() {
|
||||
const platform = process.platform;
|
||||
const arch = process.arch;
|
||||
if (platform === 'darwin' && arch === 'arm64') return 'darwin-arm64';
|
||||
if (platform === 'darwin' && arch === 'x64') return 'darwin-x64';
|
||||
if (platform === 'win32' && arch === 'x64') return 'windows-x64';
|
||||
if (platform === 'linux' && arch === 'x64') {
|
||||
return isMusl() ? 'linux-x64-musl' : 'linux-x64';
|
||||
}
|
||||
if (platform === 'linux' && arch === 'arm64') {
|
||||
return isMusl() ? 'linux-arm64-musl' : 'linux-arm64';
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
function isMusl() {
|
||||
if (process.platform !== 'linux') return false;
|
||||
const report = process.report?.getReport?.();
|
||||
return !report?.header?.glibcVersionRuntime;
|
||||
}
|
||||
|
||||
function parseArgs(argv) {
|
||||
const args = {
|
||||
all: false,
|
||||
list: false,
|
||||
targets: [],
|
||||
};
|
||||
|
||||
for (let i = 0; i < argv.length; i += 1) {
|
||||
const arg = argv[i];
|
||||
if (arg === '--all') {
|
||||
args.all = true;
|
||||
} else if (arg === '--list-targets') {
|
||||
args.list = true;
|
||||
} else if (arg === '--target' && argv[i + 1]) {
|
||||
args.targets.push(argv[i + 1]);
|
||||
i += 1;
|
||||
} else if (arg?.startsWith('--targets=')) {
|
||||
const raw = arg.split('=')[1] || '';
|
||||
args.targets.push(
|
||||
...raw
|
||||
.split(',')
|
||||
.map((value) => value.trim())
|
||||
.filter(Boolean),
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
return args;
|
||||
}
|
||||
|
||||
function ensureBunAvailable() {
|
||||
const result = spawnSync('bun', ['--version'], { stdio: 'pipe' });
|
||||
if (result.error) {
|
||||
console.error('Error: Bun is required to build native binaries.');
|
||||
console.error('Install Bun from https://bun.sh and retry.');
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
function cleanNativeDist() {
|
||||
fs.rmSync(distRoot, { recursive: true, force: true });
|
||||
fs.mkdirSync(distRoot, { recursive: true });
|
||||
}
|
||||
|
||||
function copyRecursiveSync(src, dest) {
|
||||
if (!fs.existsSync(src)) {
|
||||
return;
|
||||
}
|
||||
|
||||
const stats = fs.statSync(src);
|
||||
if (stats.isDirectory()) {
|
||||
if (!fs.existsSync(dest)) {
|
||||
fs.mkdirSync(dest, { recursive: true });
|
||||
}
|
||||
for (const entry of fs.readdirSync(src)) {
|
||||
if (entry === '.DS_Store') continue;
|
||||
copyRecursiveSync(path.join(src, entry), path.join(dest, entry));
|
||||
}
|
||||
} else {
|
||||
fs.copyFileSync(src, dest);
|
||||
if (stats.mode & 0o111) {
|
||||
fs.chmodSync(dest, stats.mode);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
function copyNativeAssets(targetDir, target) {
|
||||
if (target.os === 'darwin') {
|
||||
const sbFiles = findSandboxProfiles();
|
||||
for (const file of sbFiles) {
|
||||
fs.copyFileSync(file, path.join(targetDir, path.basename(file)));
|
||||
}
|
||||
}
|
||||
|
||||
copyVendorRipgrep(targetDir, target);
|
||||
copyRecursiveSync(localesDir, path.join(targetDir, 'locales'));
|
||||
}
|
||||
|
||||
function findSandboxProfiles() {
|
||||
const matches = [];
|
||||
const packagesDir = path.join(rootDir, 'packages');
|
||||
const stack = [packagesDir];
|
||||
|
||||
while (stack.length) {
|
||||
const current = stack.pop();
|
||||
if (!current) break;
|
||||
const entries = fs.readdirSync(current, { withFileTypes: true });
|
||||
for (const entry of entries) {
|
||||
const entryPath = path.join(current, entry.name);
|
||||
if (entry.isDirectory()) {
|
||||
stack.push(entryPath);
|
||||
} else if (entry.isFile() && entry.name.endsWith('.sb')) {
|
||||
matches.push(entryPath);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return matches;
|
||||
}
|
||||
|
||||
function copyVendorRipgrep(targetDir, target) {
|
||||
if (!fs.existsSync(vendorDir)) {
|
||||
console.warn(`Warning: Vendor directory not found at ${vendorDir}`);
|
||||
return;
|
||||
}
|
||||
|
||||
const vendorRipgrepDir = path.join(vendorDir, 'ripgrep');
|
||||
if (!fs.existsSync(vendorRipgrepDir)) {
|
||||
console.warn(`Warning: ripgrep directory not found at ${vendorRipgrepDir}`);
|
||||
return;
|
||||
}
|
||||
|
||||
const platform = target.os === 'windows' ? 'win32' : target.os;
|
||||
const ripgrepTargetDir = path.join(
|
||||
vendorRipgrepDir,
|
||||
`${target.arch}-${platform}`,
|
||||
);
|
||||
if (!fs.existsSync(ripgrepTargetDir)) {
|
||||
console.warn(`Warning: ripgrep binaries not found at ${ripgrepTargetDir}`);
|
||||
return;
|
||||
}
|
||||
|
||||
const destVendorRoot = path.join(targetDir, 'vendor');
|
||||
const destRipgrepDir = path.join(destVendorRoot, 'ripgrep');
|
||||
fs.mkdirSync(destRipgrepDir, { recursive: true });
|
||||
|
||||
const copyingFile = path.join(vendorRipgrepDir, 'COPYING');
|
||||
if (fs.existsSync(copyingFile)) {
|
||||
fs.copyFileSync(copyingFile, path.join(destRipgrepDir, 'COPYING'));
|
||||
}
|
||||
|
||||
copyRecursiveSync(
|
||||
ripgrepTargetDir,
|
||||
path.join(destRipgrepDir, path.basename(ripgrepTargetDir)),
|
||||
);
|
||||
}
|
||||
|
||||
function buildTarget(target) {
|
||||
const outputName = `${cliName}-${target.id}`;
|
||||
const targetDir = path.join(distRoot, outputName);
|
||||
const binDir = path.join(targetDir, 'bin');
|
||||
const binaryName = target.os === 'windows' ? `${cliName}.exe` : cliName;
|
||||
|
||||
fs.mkdirSync(binDir, { recursive: true });
|
||||
|
||||
const buildArgs = [
|
||||
'build',
|
||||
'--compile',
|
||||
'--target',
|
||||
target.bunTarget,
|
||||
entryPoint,
|
||||
'--outfile',
|
||||
path.join(binDir, binaryName),
|
||||
];
|
||||
|
||||
const result = spawnSync('bun', buildArgs, { stdio: 'inherit' });
|
||||
if (result.status !== 0) {
|
||||
throw new Error(`Bun build failed for ${target.id}`);
|
||||
}
|
||||
|
||||
const packageJson = {
|
||||
name: outputName,
|
||||
version,
|
||||
os: [target.os === 'windows' ? 'win32' : target.os],
|
||||
cpu: [target.arch],
|
||||
};
|
||||
|
||||
fs.writeFileSync(
|
||||
path.join(targetDir, 'package.json'),
|
||||
JSON.stringify(packageJson, null, 2) + '\n',
|
||||
);
|
||||
|
||||
copyNativeAssets(targetDir, target);
|
||||
}
|
||||
|
||||
function main() {
|
||||
if (!fs.existsSync(entryPoint)) {
|
||||
console.error(`Entry point not found at ${entryPoint}`);
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const args = parseArgs(process.argv.slice(2));
|
||||
if (args.list) {
|
||||
console.log(TARGETS.map((target) => target.id).join('\n'));
|
||||
return;
|
||||
}
|
||||
|
||||
ensureBunAvailable();
|
||||
cleanNativeDist();
|
||||
|
||||
let selectedTargets = [];
|
||||
if (args.all) {
|
||||
selectedTargets = TARGETS;
|
||||
} else if (args.targets.length > 0) {
|
||||
selectedTargets = TARGETS.filter((target) =>
|
||||
args.targets.includes(target.id),
|
||||
);
|
||||
} else {
|
||||
const hostTargetId = getHostTargetId();
|
||||
if (!hostTargetId) {
|
||||
console.error(
|
||||
`Unsupported host platform/arch: ${process.platform}/${process.arch}`,
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
selectedTargets = TARGETS.filter((target) => target.id === hostTargetId);
|
||||
}
|
||||
|
||||
if (selectedTargets.length === 0) {
|
||||
console.error('No matching targets selected.');
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
for (const target of selectedTargets) {
|
||||
console.log(`\nBuilding native binary for ${target.id}...`);
|
||||
buildTarget(target);
|
||||
}
|
||||
|
||||
console.log('\n✅ Native build complete.');
|
||||
}
|
||||
|
||||
main();
|
||||
standalone-release.md (new file, 251 lines)
@@ -0,0 +1,251 @@
|
||||
# Standalone Release Spec (Bun Native + npm Fallback)
|
||||
|
||||
This document describes the target release design for shipping Qwen Code as native
|
||||
binaries built with Bun, while retaining the existing npm JS bundle as a fallback
|
||||
distribution. It is written as a migration-ready spec that bridges the current
|
||||
release pipeline to the future dual-release system.
|
||||
|
||||
## Goal
|
||||
|
||||
Provide a CLI that:
|
||||
|
||||
- Runs as a standalone binary on Linux/macOS/Windows without requiring Node or Bun.
|
||||
- Retains npm installation (global/local) as a JS-only fallback.
|
||||
- Supports a curl installer that pulls the correct binary from GitHub Releases.
|
||||
- Ships multiple variants (x64/arm64, musl/glibc where needed).
|
||||
- Uses one release flow to produce all artifacts with a single tag/version.
|
||||
|
||||
## Non-Goals
|
||||
|
||||
- Replacing npm as a dev-time dependency manager.
|
||||
- Shipping a single universal binary for all platforms.
|
||||
- Supporting every architecture or OS outside the defined target matrix.
|
||||
- Removing the existing Node/esbuild bundle.
|
||||
|
||||
## Current State (Baseline)
|
||||
|
||||
The current release pipeline:
|
||||
|
||||
- Bundles the CLI into `dist/cli.js` via esbuild.
|
||||
- Uses `scripts/prepare-package.js` to create `dist/package.json`,
|
||||
plus `vendor/`, `locales/`, and `*.sb` assets.
|
||||
- Publishes `dist/` to npm as the primary distribution.
|
||||
- Creates a GitHub Release and attaches only `dist/cli.js`.
|
||||
- Uses `release.yml` for nightly/preview schedules and manual stable releases.
|
||||
|
||||
This spec extends the above pipeline; it does not replace it until the migration
|
||||
phases complete.
|
||||
|
||||
## Target Architecture
|
||||
|
||||
### 1) Build Outputs
|
||||
|
||||
There are two build outputs:
|
||||
|
||||
1. Native binaries (Bun compile) for a target matrix.
|
||||
2. Node-compatible JS bundle for npm fallback (existing `dist/` output).
|
||||
|
||||
Native build output for each target:
|
||||
|
||||
- dist/<name>/bin/<cli> (or .exe on Windows)
|
||||
- dist/<name>/package.json (minimal package metadata)
|
||||
|
||||
Name encodes target:
|
||||
|
||||
- <cli>-linux-x64
|
||||
- <cli>-linux-x64-musl
|
||||
- <cli>-linux-arm64
|
||||
- <cli>-linux-arm64-musl
|
||||
- <cli>-darwin-arm64
|
||||
- <cli>-darwin-x64
|
||||
- <cli>-windows-x64
|
||||
|
||||
### 2) npm Distribution (JS Fallback)
|
||||
|
||||
Keep npm as a pure JS/TS CLI package that runs under Node/Bun. Do not ship or
|
||||
auto-install native binaries through npm.
|
||||
|
||||
Implications:
|
||||
|
||||
- npm install always uses the JS implementation.
|
||||
- No optionalDependencies for platform binaries.
|
||||
- No postinstall symlink logic.
|
||||
- No node shim that searches for a native binary.
|
||||
|
||||
### 3) GitHub Release Distribution (Primary)
|
||||
|
||||
Native binaries are distributed only via GitHub Releases and the curl installer:
|
||||
|
||||
- Archive each platform binary into a tar.gz (Linux) or zip (macOS/Windows).
|
||||
- Attach archives to the GitHub Release.
|
||||
- Provide a shell installer that detects target and downloads the correct archive.
|
||||
|
||||
## Detailed Implementation
|
||||
|
||||
### A) Target Matrix
|
||||
|
||||
Define a target matrix that includes OS, arch, and libc variants.
|
||||
|
||||
Target list (fixed set):
|
||||
|
||||
- darwin arm64
|
||||
- darwin x64
|
||||
- linux arm64 (glibc)
|
||||
- linux x64 (glibc)
|
||||
- linux arm64 musl
|
||||
- linux x64 musl
|
||||
- win32 x64
|
||||
|
||||
### B) Build Scripts
|
||||
|
||||
1. Native build script (new, e.g. `scripts/build-native.ts`)
|
||||
Responsibilities:
|
||||
|
||||
- Remove native build output directory (keep npm `dist/` intact).
|
||||
- For each target:
|
||||
- Compute a target name.
|
||||
- Compile using `Bun.build({ compile: { target: ... } })`.
|
||||
- Write the binary to `dist/<name>/bin/<cli>`.
|
||||
- Write a minimal `package.json` into `dist/<name>/`.
|
||||
|
||||
2. npm fallback build (existing)
|
||||
Responsibilities:
|
||||
|
||||
- `npm run bundle` produces `dist/cli.js`.
|
||||
- `npm run prepare:package` creates `dist/package.json` and copies assets.
|
||||
|
||||
Key details:
|
||||
|
||||
- Use Bun.build with compile.target = <bun-target> (e.g. bun-linux-x64); a CLI-equivalent sketch follows this list.
|
||||
- Include any extra worker/runtime files in entrypoints.
|
||||
- Use define or execArgv to inject version/channel metadata.
|
||||
- Use "windows" in archive naming even though the OS is "win32" internally.
|
||||
|
||||
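For orientation, the same compile step can be driven from the Bun CLI, which is what the `scripts/build_native.js` added in this change does. A minimal sketch for one target, assuming the repository's `qwen` CLI name, the `packages/cli/index.ts` entry point, and the `dist/native/` layout described above:

```bash
# Compile a single target with the Bun CLI (mirrors scripts/build_native.js).
bun build --compile --target bun-linux-x64 \
  packages/cli/index.ts \
  --outfile dist/native/qwen-linux-x64/bin/qwen
```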
Build-time considerations:
|
||||
|
||||
- Preinstall platform-specific native deps for bundling (example: bun install --os="_" --cpu="_" for dependencies with native bindings).
|
||||
- Include worker assets in the compile entrypoints and embed their paths via define constants.
|
||||
- Use platform-specific bunfs root paths when resolving embedded worker files.
|
||||
- Set runtime execArgv flags for user-agent/version and system CA usage.
|
||||
|
||||
Target name example:
|
||||
<cli>-<os>-<arch>[-musl]
|
||||
|
||||
Minimal package.json example:
|
||||
{
|
||||
"name": "<cli>-linux-x64",
|
||||
"version": "<version>",
|
||||
"os": ["linux"],
|
||||
"cpu": ["x64"]
|
||||
}
|
||||
|
||||
### C) Publish Script (new, optional)
|
||||
|
||||
Responsibilities (steps 1-3 are sketched at the end of this section):
|
||||
|
||||
1. Run the native build script.
|
||||
2. Smoke test a local binary (`dist/<host>/bin/<cli> --version`).
|
||||
3. Create GitHub Release archives.
|
||||
4. Optionally build and push Docker image.
|
||||
5. Publish npm package (JS-only fallback) as a separate step or pipeline.
|
||||
|
||||
Note: npm publishing is now independent of native binary publishing. It should not reference platform binaries.
|
||||
|
||||
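A minimal sketch of steps 1-3, assuming a darwin-arm64 build host, the `qwen` CLI name, and the `dist/native/` layout produced by `scripts/build_native.js`; archive names are illustrative:

```bash
# 1) Build every native target, 2) smoke test the host binary, 3) archive per target.
node scripts/build_native.js --all
./dist/native/qwen-darwin-arm64/bin/qwen --version

cd dist/native
tar -czf qwen-linux-x64.tar.gz qwen-linux-x64      # Linux targets ship as tar.gz
zip -qr qwen-darwin-arm64.zip qwen-darwin-arm64    # macOS/Windows targets ship as zip
```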
### D) GitHub Release Installer (install)
|
||||
|
||||
A bash installer that:
|
||||
|
||||
1. Detects OS and arch.
|
||||
2. Handles Rosetta (macOS) and musl detection (Alpine, ldd); a detection sketch appears at the end of this section.
|
||||
3. Builds target name and downloads from GitHub Releases.
|
||||
4. Extracts to ~/.<cli>/bin.
|
||||
5. Adds PATH unless --no-modify-path.
|
||||
|
||||
Supports:
|
||||
|
||||
- --version <version>
|
||||
- --binary <path>
|
||||
- --no-modify-path
|
||||
|
||||
Installer details to include:
|
||||
|
||||
- Require tar for Linux and unzip for macOS/Windows archives.
|
||||
- Use "windows" in asset naming, not "win32".
|
||||
- Prefer arm64 when macOS is running under Rosetta.
|
||||
|
||||
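A minimal sketch of the detection step, assuming glibc systems mention "GLIBC" in `ldd --version` output and that Rosetta exposes `sysctl.proc_translated`; Windows handling and the actual download URL are omitted, and `<cli>` is a placeholder:

```bash
#!/usr/bin/env bash
# Detect os/arch, prefer arm64 under Rosetta, and append -musl on non-glibc Linux.
set -euo pipefail

case "$(uname -s)" in
  Darwin) os=darwin ;;
  Linux)  os=linux ;;
  *) echo "unsupported OS" >&2; exit 1 ;;
esac

case "$(uname -m)" in
  x86_64)        arch=x64 ;;
  arm64|aarch64) arch=arm64 ;;
  *) echo "unsupported arch" >&2; exit 1 ;;
esac

# macOS under Rosetta reports x86_64; prefer the arm64 binary when translated.
if [ "$os" = darwin ] && [ "$arch" = x64 ] &&
   [ "$(sysctl -in sysctl.proc_translated 2>/dev/null || echo 0)" = 1 ]; then
  arch=arm64
fi

suffix=""
# Alpine/musl systems do not report GLIBC from ldd.
if [ "$os" = linux ] && ! ldd --version 2>&1 | grep -qiE 'glibc|gnu libc'; then
  suffix="-musl"
fi

echo "<cli>-${os}-${arch}${suffix}"
```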
## CI/CD Flow (Dual Pipeline)
|
||||
|
||||
Release pipeline (native binaries):
|
||||
|
||||
1. Bump version.
|
||||
2. Build binaries for the full target matrix.
|
||||
3. Smoke test the host binary.
|
||||
4. Create GitHub release assets.
|
||||
5. Mark release as final (if draft).
|
||||
|
||||
Release pipeline (npm fallback):
|
||||
|
||||
1. Bump version (same tag).
|
||||
2. Publish the JS-only npm package.
|
||||
|
||||
Release orchestration details to consider:
|
||||
|
||||
- Update all package.json version fields in the repo.
|
||||
- Update any extension metadata or download URLs that embed version strings.
|
||||
- Tag the release and create a GitHub Release draft that includes the binary assets.
|
||||
|
||||
### Workflow Mapping to Current Code
|
||||
|
||||
The existing `release.yml` workflow remains the orchestrator:
|
||||
|
||||
- Use `scripts/get-release-version.js` for version/tag selection.
|
||||
- Keep tests and integration checks as-is.
|
||||
- Add a native build matrix job that produces archives and uploads them to
|
||||
the GitHub Release.
|
||||
- Keep the npm publish step from `dist/` as the fallback.
|
||||
- Ensure the same `RELEASE_TAG` is used for both native and npm outputs.
|
||||
|
||||
## Edge Cases and Pitfalls
|
||||
|
||||
- musl: Alpine requires musl binaries.
|
||||
- Rosetta: macOS under Rosetta should prefer arm64 when available.
|
||||
- npm fallback: ensure JS implementation is functional without native helpers.
|
||||
- Path precedence: binary install should appear earlier in PATH than npm global bin if you want native to win by default.
|
||||
- Archive prerequisites: users need tar/unzip depending on OS.
|
||||
|
||||
## Testing Plan
|
||||
|
||||
- Build all targets in CI.
|
||||
- Run dist/<host>/bin/<cli> --version.
|
||||
- npm install locally and verify CLI invocation.
|
||||
- Run installer script on each OS or VM.
|
||||
- Validate musl builds on Alpine (see the sketch after this list).
|
||||
|
||||
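One way to run the Alpine check locally, assuming Docker is available and reusing the `qwen` name and `dist/native/` layout from the build script (a sketch, not part of the CI pipeline):

```bash
# Start the musl binary inside an Alpine container and confirm it reports a version.
docker run --rm \
  -v "$PWD/dist/native/qwen-linux-x64-musl:/qwen:ro" \
  alpine:3 /qwen/bin/qwen --version
```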
## Migration Plan
|
||||
|
||||
Phase 1: Add native builds without changing npm
|
||||
|
||||
- [ ] Define target matrix with musl variants.
|
||||
- [ ] Add native build script for Bun compile per target.
|
||||
- [ ] Generate per-target package.json.
|
||||
- [ ] Produce per-target archives and upload to GitHub Releases.
|
||||
- [ ] Keep existing npm bundle publish unchanged.
|
||||
|
||||
Phase 2: Installer and docs
|
||||
|
||||
- [ ] Add curl installer for GitHub Releases.
|
||||
- [ ] Document recommended install paths (native first).
|
||||
- [ ] Add smoke tests for installer output.
|
||||
|
||||
Phase 3: Default install guidance and cleanup
|
||||
|
||||
- [ ] Update docs to recommend native install where possible.
|
||||
- [ ] Decide whether npm stays equal or fallback-only in user docs.
|
||||
|
||||
## Implementation Checklist
|
||||
|
||||
- [ ] Keep `npm run bundle` + `npm run prepare:package` for JS fallback.
|
||||
- [ ] Add `scripts/build-native.ts` for Bun compile targets.
|
||||
- [ ] Add archive creation and asset upload in `release.yml`.
|
||||
- [ ] Add an installer script with OS/arch/musl detection.
|
||||
- [ ] Ensure tag/version parity across native and npm releases.
|
||||