mirror of https://github.com/QwenLM/qwen-code.git (synced 2026-01-10 10:59:14 +00:00)

Compare commits: release/v0 ... main (42 commits)

Commit SHA1s (newest first):
bde31d1261, cba9c424eb, 6714f9ce3c, 155d1f9518, f776075aa8, 36c142951a, 2b511d0b83, 85bc0833b4, 2662639280, b7ac94ecf6, be8259b218, ca4c36f233, f41308f34c, 0a33510304, 82cbdee3b4, 81de79c899, f6a753cf78, 509d304742, 6319a6ed56, ab07c2d89c, 5ea841dd02, ded1ebcdff, afe6ba255e, fe2ed889b9, 8da376637a, 15f4c1ebd6, 492da0c8c0, 90855c93d1, db12796df5, aa9cdf2a3c, 0a0ab64da0, 8a15017593, 4d54a231b3, 0f1cb162c9, 3d059b71de, 87dc618a21, 94a5d828bd, fd41309ed2, 48bc0f35d7, e30c2dbe23, e9204ecba9, f24bda3d7b
@@ -136,6 +136,95 @@ Settings are organized into categories. All settings should be placed within the
- `"./custom-logs"` - Logs to `./custom-logs`, relative to the current directory
- `"/tmp/openai-logs"` - Logs to the absolute path `/tmp/openai-logs`

#### modelProviders

Use `modelProviders` to declare curated model lists per auth type that the `/model` picker can switch between. Keys must be valid auth types (`openai`, `anthropic`, `gemini`, `vertex-ai`, etc.). Each entry requires an `id` and **must include `envKey`**; `name`, `description`, `baseUrl`, and `generationConfig` are optional. Credentials are never persisted in settings; the runtime reads them from `process.env[envKey]`. Qwen OAuth models remain hard-coded and cannot be overridden.

##### Example

```json
{
  "modelProviders": {
    "openai": [
      {
        "id": "gpt-4o",
        "name": "GPT-4o",
        "envKey": "OPENAI_API_KEY",
        "baseUrl": "https://api.openai.com/v1",
        "generationConfig": {
          "timeout": 60000,
          "maxRetries": 3,
          "samplingParams": { "temperature": 0.2 }
        }
      }
    ],
    "anthropic": [
      {
        "id": "claude-3-5-sonnet",
        "envKey": "ANTHROPIC_API_KEY",
        "baseUrl": "https://api.anthropic.com/v1"
      }
    ],
    "gemini": [
      {
        "id": "gemini-2.0-flash",
        "name": "Gemini 2.0 Flash",
        "envKey": "GEMINI_API_KEY",
        "baseUrl": "https://generativelanguage.googleapis.com"
      }
    ],
    "vertex-ai": [
      {
        "id": "gemini-1.5-pro-vertex",
        "envKey": "GOOGLE_API_KEY",
        "baseUrl": "https://generativelanguage.googleapis.com"
      }
    ]
  }
}
```

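Because each entry stores only the *name* of an environment variable, switching providers never requires editing credentials in `settings.json`. The following is a minimal sketch of that lookup (illustrative only, not the actual qwen-code implementation; the `ProviderModel` type and `resolveApiKey` helper are invented here):

```ts
// Illustrative sketch: a provider entry carries the *name* of the env var,
// and the credential itself is read from the process environment at runtime.
interface ProviderModel {
  id: string;
  envKey: string; // e.g. "OPENAI_API_KEY" or a custom name like "CUSTOM_API_KEY"
  name?: string;
  baseUrl?: string;
}

function resolveApiKey(entry: ProviderModel): string | undefined {
  // The settings file never stores the key itself, only the variable name.
  return process.env[entry.envKey];
}

// Example: returns the value of OPENAI_API_KEY, or undefined if it is not set.
const key = resolveApiKey({ id: 'gpt-4o', envKey: 'OPENAI_API_KEY' });
console.log(key ? 'credential found' : 'credential missing');
```
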
> [!note]
> Only the `/model` command exposes non-default auth types: Anthropic, Gemini, Vertex AI, and so on must be defined via `modelProviders`. The `/auth` command intentionally lists only the built-in Qwen OAuth and OpenAI flows.

##### Resolution layers and atomicity

The effective auth, model, and credential values are chosen per field using the following precedence (first present wins). You can combine `--auth-type` with `--model` to point directly at a provider entry; these CLI flags are applied before the remaining layers are consulted.

| Layer (highest → lowest)   | authType                            | model                                           | apiKey                                              | baseUrl                                              | apiKeyEnvKey           | proxy                             |
| -------------------------- | ----------------------------------- | ----------------------------------------------- | --------------------------------------------------- | ---------------------------------------------------- | ---------------------- | --------------------------------- |
| Programmatic overrides     | `/auth` input                       | `/auth` input                                   | `/auth` input                                       | `/auth` input                                        | —                      | —                                 |
| Model provider selection   | —                                   | `modelProvider.id`                              | `env[modelProvider.envKey]`                         | `modelProvider.baseUrl`                              | `modelProvider.envKey` | —                                 |
| CLI arguments              | `--auth-type`                       | `--model`                                       | `--openaiApiKey` (or provider-specific equivalents) | `--openaiBaseUrl` (or provider-specific equivalents) | —                      | —                                 |
| Environment variables      | —                                   | Provider-specific mapping (e.g. `OPENAI_MODEL`) | Provider-specific mapping (e.g. `OPENAI_API_KEY`)   | Provider-specific mapping (e.g. `OPENAI_BASE_URL`)   | —                      | —                                 |
| Settings (`settings.json`) | `security.auth.selectedType`        | `model.name`                                    | `security.auth.apiKey`                              | `security.auth.baseUrl`                              | —                      | —                                 |
| Default / computed         | Falls back to `AuthType.QWEN_OAUTH` | Built-in default (OpenAI ⇒ `qwen3-coder-plus`)  | —                                                   | —                                                    | —                      | `Config.getProxy()` if configured |

\*When present, CLI auth flags override settings; otherwise `security.auth.selectedType` or the implicit default determines the auth type. Qwen OAuth and OpenAI are the only auth types surfaced without extra configuration.

Model-provider sourced values are applied atomically: once a provider model is active, every field it defines is protected from lower layers until you manually clear credentials via `/auth`. The final `generationConfig` is the projection across all layers: lower layers only fill gaps left by higher ones, and values supplied by the provider layer cannot be overridden from below.

The merge strategy for `modelProviders` is REPLACE: a `modelProviders` section in project settings overrides the entire corresponding section in user settings rather than being merged with it.

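As a rough illustration of the per-field, first-present-wins resolution described above (a sketch only; the `Layer` type and `resolveField` helper are invented here and are not the real resolver):

```ts
// Sketch of "first present wins" per-field resolution across layers.
// Layer order mirrors the table above, highest precedence first.
type Field = 'authType' | 'model' | 'apiKey' | 'baseUrl';

interface Layer {
  name: string;
  values: Partial<Record<Field, string>>;
}

function resolveField(layers: Layer[], field: Field): { value?: string; source?: string } {
  for (const layer of layers) {
    const value = layer.values[field];
    if (value !== undefined && value !== '') {
      // The first layer that defines the field wins; the source is recorded
      // so switches stay debuggable ("source-attributed").
      return { value, source: layer.name };
    }
  }
  return {}; // fall through to defaults / computed values
}

// Example: the model comes from the provider selection even though an
// environment variable also defines one; the API key falls through to env.
const layers: Layer[] = [
  { name: 'programmatic', values: {} },
  { name: 'modelProvider', values: { model: 'gpt-4o', baseUrl: 'https://api.openai.com/v1' } },
  { name: 'cli', values: {} },
  { name: 'env', values: { model: 'some-env-model', apiKey: 'sk-from-env' } },
  { name: 'settings', values: { authType: 'openai' } },
];
console.log(resolveField(layers, 'model'));  // { value: 'gpt-4o', source: 'modelProvider' }
console.log(resolveField(layers, 'apiKey')); // { value: 'sk-from-env', source: 'env' }
```
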
##### Generation config layering

Per-field precedence for `generationConfig`:

1. Programmatic overrides (e.g. runtime `/model` or `/auth` changes)
2. `modelProviders[authType][].generationConfig`
3. `settings.model.generationConfig`
4. Content-generator defaults (`getDefaultGenerationConfig` for OpenAI, `getParameterValue` for Gemini, etc.)

`samplingParams` is treated atomically; provider values replace the entire object. Content-generator defaults apply last, so each provider retains its tuned baseline (see the sketch below).

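A small sketch of this layering (a hypothetical helper, not the real merge code, which lives in the content generators) shows why `samplingParams` is replaced as a whole rather than merged field by field:

```ts
// Sketch: later (lower-precedence) layers only fill fields the result does not
// define yet, except samplingParams, which is taken atomically from the
// highest-precedence layer that defines it.
interface GenerationConfig {
  timeout?: number;
  maxRetries?: number;
  samplingParams?: Record<string, number>;
}

function layerGenerationConfig(
  ...layers: Array<GenerationConfig | undefined> // highest precedence first
): GenerationConfig {
  const result: GenerationConfig = {};
  for (const layer of layers) {
    if (!layer) continue;
    result.timeout ??= layer.timeout;
    result.maxRetries ??= layer.maxRetries;
    result.samplingParams ??= layer.samplingParams; // atomic: whole object or nothing
  }
  return result;
}

// The provider sets samplingParams, so the settings-level top_p is ignored,
// while maxRetries still falls through from settings and timeout from defaults.
const merged = layerGenerationConfig(
  { samplingParams: { temperature: 0.2 } },          // modelProviders[...].generationConfig
  { maxRetries: 5, samplingParams: { top_p: 0.9 } }, // settings.model.generationConfig
  { timeout: 60000 },                                // content-generator defaults
);
console.log(merged); // { timeout: 60000, maxRetries: 5, samplingParams: { temperature: 0.2 } }
```
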
##### Selection persistence and recommendations

> [!important]
> Define `modelProviders` in the user-scope `~/.qwen/settings.json` whenever possible, and avoid persisting credential overrides in any scope. Keeping the provider catalog in user settings prevents merge/override conflicts between project and user scopes and ensures that `/auth` and `/model` updates always write back to a consistent scope.

- `/model` and `/auth` persist `model.name` (where applicable) and `security.auth.selectedType` to the closest writable scope that already defines `modelProviders`; otherwise they fall back to the user scope. This keeps workspace and user files in sync with the active provider catalog.
- Without `modelProviders`, the resolver mixes CLI, environment, and settings layers, which is fine for single-provider setups but cumbersome when switching providers frequently. Define provider catalogs whenever multi-model workflows are common so that switches stay atomic, source-attributed, and debuggable.

#### context

| Setting | Type | Description | Default |

@@ -1,5 +1,6 @@
# Qwen Code overview
[](https://npm-compare.com/@qwen-code/qwen-code)

[](https://npm-compare.com/@qwen-code/qwen-code)
[](https://www.npmjs.com/package/@qwen-code/qwen-code)

> Learn about Qwen Code, Qwen's agentic coding tool that lives in your terminal and helps you turn ideas into code faster than ever before.

package-lock.json (generated, 12 changed lines)
@@ -1,12 +1,12 @@
{
  "name": "@qwen-code/qwen-code",
  "version": "0.6.1-nightly.20260108.570ec432",
  "version": "0.7.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "@qwen-code/qwen-code",
      "version": "0.6.1-nightly.20260108.570ec432",
      "version": "0.7.0",
      "workspaces": [
        "packages/*"
      ],
@@ -17316,7 +17316,7 @@
    },
    "packages/cli": {
      "name": "@qwen-code/qwen-code",
      "version": "0.6.1-nightly.20260108.570ec432",
      "version": "0.7.0",
      "dependencies": {
        "@google/genai": "1.30.0",
        "@iarna/toml": "^2.2.5",
@@ -17953,7 +17953,7 @@
    },
    "packages/core": {
      "name": "@qwen-code/qwen-code-core",
      "version": "0.6.1-nightly.20260108.570ec432",
      "version": "0.7.0",
      "hasInstallScript": true,
      "dependencies": {
        "@anthropic-ai/sdk": "^0.36.1",
@@ -21413,7 +21413,7 @@
    },
    "packages/test-utils": {
      "name": "@qwen-code/qwen-code-test-utils",
      "version": "0.6.1-nightly.20260108.570ec432",
      "version": "0.7.0",
      "dev": true,
      "license": "Apache-2.0",
      "devDependencies": {
@@ -21425,7 +21425,7 @@
    },
    "packages/vscode-ide-companion": {
      "name": "qwen-code-vscode-ide-companion",
      "version": "0.6.1-nightly.20260108.570ec432",
      "version": "0.7.0",
      "license": "LICENSE",
      "dependencies": {
        "@modelcontextprotocol/sdk": "^1.25.1",

@@ -1,6 +1,6 @@
{
  "name": "@qwen-code/qwen-code",
  "version": "0.6.1-nightly.20260108.570ec432",
  "version": "0.7.0",
  "engines": {
    "node": ">=20.0.0"
  },
@@ -13,7 +13,7 @@
    "url": "git+https://github.com/QwenLM/qwen-code.git"
  },
  "config": {
    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.6.1-nightly.20260108.570ec432"
    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.0"
  },
  "scripts": {
    "start": "cross-env node scripts/start.js",

@@ -1,6 +1,6 @@
{
  "name": "@qwen-code/qwen-code",
  "version": "0.6.1-nightly.20260108.570ec432",
  "version": "0.7.0",
  "description": "Qwen Code",
  "repository": {
    "type": "git",
@@ -33,7 +33,7 @@
    "dist"
  ],
  "config": {
    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.6.1-nightly.20260108.570ec432"
    "sandboxImageUri": "ghcr.io/qwenlm/qwen-code:0.7.0"
  },
  "dependencies": {
    "@google/genai": "1.30.0",

@@ -1,41 +1,112 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { AuthType } from '@qwen-code/qwen-code-core';
import { vi } from 'vitest';
import { validateAuthMethod } from './auth.js';
import * as settings from './settings.js';

vi.mock('./settings.js', () => ({
  loadEnvironment: vi.fn(),
  loadSettings: vi.fn().mockReturnValue({
    merged: vi.fn().mockReturnValue({}),
    merged: {},
  }),
}));

describe('validateAuthMethod', () => {
  beforeEach(() => {
    vi.resetModules();
    // Reset mock to default
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {},
    } as ReturnType<typeof settings.loadSettings>);
  });

  afterEach(() => {
    vi.unstubAllEnvs();
    delete process.env['OPENAI_API_KEY'];
    delete process.env['CUSTOM_API_KEY'];
    delete process.env['GEMINI_API_KEY'];
    delete process.env['GEMINI_API_KEY_ALTERED'];
    delete process.env['ANTHROPIC_API_KEY'];
    delete process.env['ANTHROPIC_BASE_URL'];
    delete process.env['GOOGLE_API_KEY'];
  });

  it('should return null for USE_OPENAI', () => {
  it('should return null for USE_OPENAI with default env key', () => {
    process.env['OPENAI_API_KEY'] = 'fake-key';
    expect(validateAuthMethod(AuthType.USE_OPENAI)).toBeNull();
  });

  it('should return an error message for USE_OPENAI if OPENAI_API_KEY is not set', () => {
    delete process.env['OPENAI_API_KEY'];
  it('should return an error message for USE_OPENAI if no API key is available', () => {
    expect(validateAuthMethod(AuthType.USE_OPENAI)).toBe(
      'OPENAI_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.',
      "Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the 'OPENAI_API_KEY' environment variable.",
    );
  });

  it('should return null for USE_OPENAI with custom envKey from modelProviders', () => {
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'custom-model' },
        modelProviders: {
          openai: [{ id: 'custom-model', envKey: 'CUSTOM_API_KEY' }],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);
    process.env['CUSTOM_API_KEY'] = 'custom-key';

    expect(validateAuthMethod(AuthType.USE_OPENAI)).toBeNull();
  });

  it('should return error with custom envKey hint when modelProviders envKey is set but env var is missing', () => {
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'custom-model' },
        modelProviders: {
          openai: [{ id: 'custom-model', envKey: 'CUSTOM_API_KEY' }],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);

    const result = validateAuthMethod(AuthType.USE_OPENAI);
    expect(result).toContain('CUSTOM_API_KEY');
  });

  it('should return null for USE_GEMINI with custom envKey', () => {
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'gemini-1.5-flash' },
        modelProviders: {
          gemini: [
            { id: 'gemini-1.5-flash', envKey: 'GEMINI_API_KEY_ALTERED' },
          ],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);
    process.env['GEMINI_API_KEY_ALTERED'] = 'altered-key';

    expect(validateAuthMethod(AuthType.USE_GEMINI)).toBeNull();
  });

  it('should return error with custom envKey for USE_GEMINI when env var is missing', () => {
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'gemini-1.5-flash' },
        modelProviders: {
          gemini: [
            { id: 'gemini-1.5-flash', envKey: 'GEMINI_API_KEY_ALTERED' },
          ],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);

    const result = validateAuthMethod(AuthType.USE_GEMINI);
    expect(result).toContain('GEMINI_API_KEY_ALTERED');
  });

  it('should return null for QWEN_OAUTH', () => {
    expect(validateAuthMethod(AuthType.QWEN_OAUTH)).toBeNull();
  });
@@ -45,4 +116,115 @@ describe('validateAuthMethod', () => {
      'Invalid auth method selected.',
    );
  });

  it('should return null for USE_ANTHROPIC with custom envKey and baseUrl', () => {
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'claude-3' },
        modelProviders: {
          anthropic: [
            {
              id: 'claude-3',
              envKey: 'CUSTOM_ANTHROPIC_KEY',
              baseUrl: 'https://api.anthropic.com',
            },
          ],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);
    process.env['CUSTOM_ANTHROPIC_KEY'] = 'custom-anthropic-key';

    expect(validateAuthMethod(AuthType.USE_ANTHROPIC)).toBeNull();
  });

  it('should return error for USE_ANTHROPIC when baseUrl is missing', () => {
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'claude-3' },
        modelProviders: {
          anthropic: [{ id: 'claude-3', envKey: 'CUSTOM_ANTHROPIC_KEY' }],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);
    process.env['CUSTOM_ANTHROPIC_KEY'] = 'custom-key';

    const result = validateAuthMethod(AuthType.USE_ANTHROPIC);
    expect(result).toContain('modelProviders[].baseUrl');
  });

  it('should return null for USE_VERTEX_AI with custom envKey', () => {
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'vertex-model' },
        modelProviders: {
          'vertex-ai': [
            { id: 'vertex-model', envKey: 'GOOGLE_API_KEY_VERTEX' },
          ],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);
    process.env['GOOGLE_API_KEY_VERTEX'] = 'vertex-key';

    expect(validateAuthMethod(AuthType.USE_VERTEX_AI)).toBeNull();
  });

  it('should use config.modelsConfig.getModel() when Config is provided', () => {
    // Settings has a different model
    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'settings-model' },
        modelProviders: {
          openai: [
            { id: 'settings-model', envKey: 'SETTINGS_API_KEY' },
            { id: 'cli-model', envKey: 'CLI_API_KEY' },
          ],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);

    // Mock Config object that returns a different model (e.g., from CLI args)
    const mockConfig = {
      modelsConfig: {
        getModel: vi.fn().mockReturnValue('cli-model'),
      },
    } as unknown as import('@qwen-code/qwen-code-core').Config;

    // Set the env key for the CLI model, not the settings model
    process.env['CLI_API_KEY'] = 'cli-key';

    // Should use 'cli-model' from config.modelsConfig.getModel(), not 'settings-model'
    const result = validateAuthMethod(AuthType.USE_OPENAI, mockConfig);
    expect(result).toBeNull();
    expect(mockConfig.modelsConfig.getModel).toHaveBeenCalled();
  });

  it('should fail validation when Config provides different model without matching env key', () => {
    // Clean up any existing env keys first
    delete process.env['CLI_API_KEY'];
    delete process.env['SETTINGS_API_KEY'];
    delete process.env['OPENAI_API_KEY'];

    vi.mocked(settings.loadSettings).mockReturnValue({
      merged: {
        model: { name: 'settings-model' },
        modelProviders: {
          openai: [
            { id: 'settings-model', envKey: 'SETTINGS_API_KEY' },
            { id: 'cli-model', envKey: 'CLI_API_KEY' },
          ],
        },
      },
    } as unknown as ReturnType<typeof settings.loadSettings>);

    const mockConfig = {
      modelsConfig: {
        getModel: vi.fn().mockReturnValue('cli-model'),
      },
    } as unknown as import('@qwen-code/qwen-code-core').Config;

    // Don't set CLI_API_KEY - validation should fail
    const result = validateAuthMethod(AuthType.USE_OPENAI, mockConfig);
    expect(result).not.toBeNull();
    expect(result).toContain('CLI_API_KEY');
  });
});

@@ -1,21 +1,169 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { AuthType } from '@qwen-code/qwen-code-core';
import { loadEnvironment, loadSettings } from './settings.js';
import {
  AuthType,
  type Config,
  type ModelProvidersConfig,
  type ProviderModelConfig,
} from '@qwen-code/qwen-code-core';
import { loadEnvironment, loadSettings, type Settings } from './settings.js';
import { t } from '../i18n/index.js';

export function validateAuthMethod(authMethod: string): string | null {
/**
 * Default environment variable names for each auth type
 */
const DEFAULT_ENV_KEYS: Record<string, string> = {
  [AuthType.USE_OPENAI]: 'OPENAI_API_KEY',
  [AuthType.USE_ANTHROPIC]: 'ANTHROPIC_API_KEY',
  [AuthType.USE_GEMINI]: 'GEMINI_API_KEY',
  [AuthType.USE_VERTEX_AI]: 'GOOGLE_API_KEY',
};

/**
 * Find model configuration from modelProviders by authType and modelId
 */
function findModelConfig(
  modelProviders: ModelProvidersConfig | undefined,
  authType: string,
  modelId: string | undefined,
): ProviderModelConfig | undefined {
  if (!modelProviders || !modelId) {
    return undefined;
  }

  const models = modelProviders[authType];
  if (!Array.isArray(models)) {
    return undefined;
  }

  return models.find((m) => m.id === modelId);
}

/**
 * Check if API key is available for the given auth type and model configuration.
 * Prioritizes custom envKey from modelProviders over default environment variables.
 */
function hasApiKeyForAuth(
  authType: string,
  settings: Settings,
  config?: Config,
): {
  hasKey: boolean;
  checkedEnvKey: string | undefined;
  isExplicitEnvKey: boolean;
} {
  const modelProviders = settings.modelProviders as
    | ModelProvidersConfig
    | undefined;

  // Use config.modelsConfig.getModel() if available for accurate model ID resolution
  // that accounts for CLI args, env vars, and settings. Fall back to settings.model.name.
  const modelId = config?.modelsConfig.getModel() ?? settings.model?.name;

  // Try to find model-specific envKey from modelProviders
  const modelConfig = findModelConfig(modelProviders, authType, modelId);
  if (modelConfig?.envKey) {
    // Explicit envKey configured - only check this env var, no apiKey fallback
    const hasKey = !!process.env[modelConfig.envKey];
    return {
      hasKey,
      checkedEnvKey: modelConfig.envKey,
      isExplicitEnvKey: true,
    };
  }

  // Using default environment variable - apiKey fallback is allowed
  const defaultEnvKey = DEFAULT_ENV_KEYS[authType];
  if (defaultEnvKey) {
    const hasKey = !!process.env[defaultEnvKey];
    if (hasKey) {
      return { hasKey, checkedEnvKey: defaultEnvKey, isExplicitEnvKey: false };
    }
  }

  // Also check settings.security.auth.apiKey as fallback (only for default env key)
  if (settings.security?.auth?.apiKey) {
    return {
      hasKey: true,
      checkedEnvKey: defaultEnvKey || undefined,
      isExplicitEnvKey: false,
    };
  }

  return {
    hasKey: false,
    checkedEnvKey: defaultEnvKey,
    isExplicitEnvKey: false,
  };
}

/**
 * Generate API key error message based on auth check result.
 * Returns null if API key is present, otherwise returns the appropriate error message.
 */
function getApiKeyError(
  authMethod: string,
  settings: Settings,
  config?: Config,
): string | null {
  const { hasKey, checkedEnvKey, isExplicitEnvKey } = hasApiKeyForAuth(
    authMethod,
    settings,
    config,
  );
  if (hasKey) {
    return null;
  }

  const envKeyHint = checkedEnvKey || DEFAULT_ENV_KEYS[authMethod];
  if (isExplicitEnvKey) {
    return t(
      '{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.',
      { envKeyHint },
    );
  }
  return t(
    '{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.',
    { envKeyHint },
  );
}

/**
 * Validate that the required credentials and configuration exist for the given auth method.
 */
export function validateAuthMethod(
  authMethod: string,
  config?: Config,
): string | null {
  const settings = loadSettings();
  loadEnvironment(settings.merged);

  if (authMethod === AuthType.USE_OPENAI) {
    const hasApiKey =
      process.env['OPENAI_API_KEY'] || settings.merged.security?.auth?.apiKey;
    if (!hasApiKey) {
      return 'OPENAI_API_KEY environment variable not found. You can enter it interactively or add it to your .env file.';
    const { hasKey, checkedEnvKey, isExplicitEnvKey } = hasApiKeyForAuth(
      authMethod,
      settings.merged,
      config,
    );
    if (!hasKey) {
      const envKeyHint = checkedEnvKey
        ? `'${checkedEnvKey}'`
        : "'OPENAI_API_KEY'";
      if (isExplicitEnvKey) {
        // Explicit envKey configured - only suggest setting the env var
        return t(
          'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.',
          { envKeyHint },
        );
      }
      // Default env key - can use either apiKey or env var
      return t(
        'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.',
        { envKeyHint },
      );
    }
    return null;
  }
@@ -27,36 +175,49 @@ export function validateAuthMethod(authMethod: string): string | null {
  }

  if (authMethod === AuthType.USE_ANTHROPIC) {
    const hasApiKey = process.env['ANTHROPIC_API_KEY'];
    if (!hasApiKey) {
      return 'ANTHROPIC_API_KEY environment variable not found.';
    const apiKeyError = getApiKeyError(authMethod, settings.merged, config);
    if (apiKeyError) {
      return apiKeyError;
    }

    const hasBaseUrl = process.env['ANTHROPIC_BASE_URL'];
    if (!hasBaseUrl) {
      return 'ANTHROPIC_BASE_URL environment variable not found.';
    // Check baseUrl - can come from modelProviders or environment
    const modelProviders = settings.merged.modelProviders as
      | ModelProvidersConfig
      | undefined;
    // Use config.modelsConfig.getModel() if available for accurate model ID
    const modelId =
      config?.modelsConfig.getModel() ?? settings.merged.model?.name;
    const modelConfig = findModelConfig(modelProviders, authMethod, modelId);

    if (modelConfig && !modelConfig.baseUrl) {
      return t(
        'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.',
      );
    }
    if (!modelConfig && !process.env['ANTHROPIC_BASE_URL']) {
      return t('ANTHROPIC_BASE_URL environment variable not found.');
    }

    return null;
  }

  if (authMethod === AuthType.USE_GEMINI) {
    const hasApiKey = process.env['GEMINI_API_KEY'];
    if (!hasApiKey) {
      return 'GEMINI_API_KEY environment variable not found. Please set it in your .env file or environment variables.';
    const apiKeyError = getApiKeyError(authMethod, settings.merged, config);
    if (apiKeyError) {
      return apiKeyError;
    }
    return null;
  }

  if (authMethod === AuthType.USE_VERTEX_AI) {
    const hasApiKey = process.env['GOOGLE_API_KEY'];
    if (!hasApiKey) {
      return 'GOOGLE_API_KEY environment variable not found. Please set it in your .env file or environment variables.';
    const apiKeyError = getApiKeyError(authMethod, settings.merged, config);
    if (apiKeyError) {
      return apiKeyError;
    }

    process.env['GOOGLE_GENAI_USE_VERTEXAI'] = 'true';
    return null;
  }

  return 'Invalid auth method selected.';
  return t('Invalid auth method selected.');
}

@@ -77,10 +77,8 @@ vi.mock('read-package-up', () => ({
  ),
}));

vi.mock('@qwen-code/qwen-code-core', async () => {
  const actualServer = await vi.importActual<typeof ServerConfig>(
    '@qwen-code/qwen-code-core',
  );
vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
  const actualServer = await importOriginal<typeof ServerConfig>();
  return {
    ...actualServer,
    IdeClient: {

@@ -31,6 +31,10 @@ import {
} from '@qwen-code/qwen-code-core';
import { extensionsCommand } from '../commands/extensions.js';
import type { Settings } from './settings.js';
import {
  resolveCliGenerationConfig,
  getAuthTypeFromEnv,
} from '../utils/modelConfigUtils.js';
import yargs, { type Argv } from 'yargs';
import { hideBin } from 'yargs/helpers';
import * as fs from 'node:fs';
@@ -924,28 +928,25 @@ export async function loadCliConfig(

  const selectedAuthType =
    (argv.authType as AuthType | undefined) ||
    settings.security?.auth?.selectedType;
    settings.security?.auth?.selectedType ||
    /* getAuthTypeFromEnv means no authType was explicitly provided, we infer the authType from env vars */
    getAuthTypeFromEnv();

  const apiKey =
    (selectedAuthType === AuthType.USE_OPENAI
      ? argv.openaiApiKey ||
        process.env['OPENAI_API_KEY'] ||
        settings.security?.auth?.apiKey
      : '') || '';
  const baseUrl =
    (selectedAuthType === AuthType.USE_OPENAI
      ? argv.openaiBaseUrl ||
        process.env['OPENAI_BASE_URL'] ||
        settings.security?.auth?.baseUrl
      : '') || '';
  const resolvedModel =
    argv.model ||
    (selectedAuthType === AuthType.USE_OPENAI
      ? process.env['OPENAI_MODEL'] ||
        process.env['QWEN_MODEL'] ||
        settings.model?.name
      : '') ||
    '';
  // Unified resolution of generation config with source attribution
  const resolvedCliConfig = resolveCliGenerationConfig({
    argv: {
      model: argv.model,
      openaiApiKey: argv.openaiApiKey,
      openaiBaseUrl: argv.openaiBaseUrl,
      openaiLogging: argv.openaiLogging,
      openaiLoggingDir: argv.openaiLoggingDir,
    },
    settings,
    selectedAuthType,
    env: process.env as Record<string, string | undefined>,
  });

  const { model: resolvedModel } = resolvedCliConfig;

  const sandboxConfig = await loadSandboxConfig(settings, argv);
  const screenReader =
@@ -979,6 +980,8 @@
    }
  }

  const modelProvidersConfig = settings.modelProviders;

  return new Config({
    sessionId,
    sessionData,
@@ -1036,24 +1039,11 @@
    inputFormat,
    outputFormat,
    includePartialMessages,
    generationConfig: {
      ...(settings.model?.generationConfig || {}),
      model: resolvedModel,
      apiKey,
      baseUrl,
      enableOpenAILogging:
        (typeof argv.openaiLogging === 'undefined'
          ? settings.model?.enableOpenAILogging
          : argv.openaiLogging) ?? false,
      openAILoggingDir:
        argv.openaiLoggingDir || settings.model?.openAILoggingDir,
    },
    modelProvidersConfig,
    generationConfigSources: resolvedCliConfig.sources,
    generationConfig: resolvedCliConfig.generationConfig,
    cliVersion: await getCliVersion(),
    webSearch: buildWebSearchConfig(
      argv,
      settings,
      settings.security?.auth?.selectedType,
    ),
    webSearch: buildWebSearchConfig(argv, settings, selectedAuthType),
    summarizeToolOutput: settings.model?.summarizeToolOutput,
    ideMode,
    chatCompression: settings.model?.chatCompression,

packages/cli/src/config/modelProvidersScope.test.ts (new file, 87 lines)
@@ -0,0 +1,87 @@
/**
 * @license
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { describe, expect, it } from 'vitest';
import { SettingScope } from './settings.js';
import { getPersistScopeForModelSelection } from './modelProvidersScope.js';

function makeSettings({
  isTrusted,
  userModelProviders,
  workspaceModelProviders,
}: {
  isTrusted: boolean;
  userModelProviders?: unknown;
  workspaceModelProviders?: unknown;
}) {
  const userSettings: Record<string, unknown> = {};
  const workspaceSettings: Record<string, unknown> = {};

  // When undefined, treat as "not present in this scope" (the key is omitted),
  // matching how LoadedSettings is shaped when a settings file doesn't define it.
  if (userModelProviders !== undefined) {
    userSettings['modelProviders'] = userModelProviders;
  }
  if (workspaceModelProviders !== undefined) {
    workspaceSettings['modelProviders'] = workspaceModelProviders;
  }

  return {
    isTrusted,
    user: { settings: userSettings },
    workspace: { settings: workspaceSettings },
  } as unknown as import('./settings.js').LoadedSettings;
}

describe('getPersistScopeForModelSelection', () => {
  it('prefers workspace when trusted and workspace defines modelProviders', () => {
    const settings = makeSettings({
      isTrusted: true,
      workspaceModelProviders: {},
      userModelProviders: { anything: true },
    });

    expect(getPersistScopeForModelSelection(settings)).toBe(
      SettingScope.Workspace,
    );
  });

  it('falls back to user when workspace does not define modelProviders', () => {
    const settings = makeSettings({
      isTrusted: true,
      workspaceModelProviders: undefined,
      userModelProviders: {},
    });

    expect(getPersistScopeForModelSelection(settings)).toBe(SettingScope.User);
  });

  it('ignores workspace modelProviders when workspace is untrusted', () => {
    const settings = makeSettings({
      isTrusted: false,
      workspaceModelProviders: {},
      userModelProviders: undefined,
    });

    expect(getPersistScopeForModelSelection(settings)).toBe(SettingScope.User);
  });

  it('falls back to legacy trust heuristic when neither scope defines modelProviders', () => {
    const trusted = makeSettings({
      isTrusted: true,
      userModelProviders: undefined,
      workspaceModelProviders: undefined,
    });
    expect(getPersistScopeForModelSelection(trusted)).toBe(SettingScope.User);

    const untrusted = makeSettings({
      isTrusted: false,
      userModelProviders: undefined,
      workspaceModelProviders: undefined,
    });
    expect(getPersistScopeForModelSelection(untrusted)).toBe(SettingScope.User);
  });
});

packages/cli/src/config/modelProvidersScope.ts (new file, 48 lines)
@@ -0,0 +1,48 @@
/**
 * @license
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { SettingScope, type LoadedSettings } from './settings.js';

function hasOwnModelProviders(settingsObj: unknown): boolean {
  if (!settingsObj || typeof settingsObj !== 'object') {
    return false;
  }
  const obj = settingsObj as Record<string, unknown>;
  // Treat an explicitly configured empty object (modelProviders: {}) as "owned"
  // by this scope, which is important when mergeStrategy is REPLACE.
  return Object.prototype.hasOwnProperty.call(obj, 'modelProviders');
}

/**
 * Returns which writable scope (Workspace/User) owns the effective modelProviders
 * configuration.
 *
 * Note: Workspace scope is only considered when the workspace is trusted.
 */
export function getModelProvidersOwnerScope(
  settings: LoadedSettings,
): SettingScope | undefined {
  if (settings.isTrusted && hasOwnModelProviders(settings.workspace.settings)) {
    return SettingScope.Workspace;
  }

  if (hasOwnModelProviders(settings.user.settings)) {
    return SettingScope.User;
  }

  return undefined;
}

/**
 * Choose the settings scope to persist a model selection.
 * Prefer persisting back to the scope that contains the effective modelProviders
 * config, otherwise fall back to the legacy trust-based heuristic.
 */
export function getPersistScopeForModelSelection(
  settings: LoadedSettings,
): SettingScope {
  return getModelProvidersOwnerScope(settings) ?? SettingScope.User;
}

@@ -10,6 +10,7 @@ import type {
  TelemetrySettings,
  AuthType,
  ChatCompressionSettings,
  ModelProvidersConfig,
} from '@qwen-code/qwen-code-core';
import {
  ApprovalMode,
@@ -102,6 +103,19 @@
    mergeStrategy: MergeStrategy.SHALLOW_MERGE,
  },

  // Model providers configuration grouped by authType
  modelProviders: {
    type: 'object',
    label: 'Model Providers',
    category: 'Model',
    requiresRestart: false,
    default: {} as ModelProvidersConfig,
    description:
      'Model providers configuration grouped by authType. Each authType contains an array of model configurations.',
    showInDialog: false,
    mergeStrategy: MergeStrategy.REPLACE,
  },

  general: {
    type: 'object',
    label: 'General',

@@ -45,7 +45,9 @@ export async function initializeApp(
  // Auto-detect and set LLM output language on first use
  initializeLlmOutputLanguage();

  const authType = settings.merged.security?.auth?.selectedType;
  // Use authType from modelsConfig which respects CLI --auth-type argument
  // over settings.security.auth.selectedType
  const authType = config.modelsConfig.getCurrentAuthType();
  const authError = await performInitialAuth(config, authType);

  // Fallback to user select when initial authentication fails
@@ -59,7 +61,7 @@
  const themeError = validateTheme(settings);

  const shouldOpenAuthDialog =
    settings.merged.security?.auth?.selectedType === undefined || !!authError;
    !config.modelsConfig.wasAuthTypeExplicitlyProvided() || !!authError;

  if (config.getIdeMode()) {
    const ideClient = await IdeClient.getInstance();

@@ -87,6 +87,15 @@ vi.mock('./config/sandboxConfig.js', () => ({
  loadSandboxConfig: vi.fn(),
}));

vi.mock('./core/initializer.js', () => ({
  initializeApp: vi.fn().mockResolvedValue({
    authError: null,
    themeError: null,
    shouldOpenAuthDialog: false,
    geminiMdFileCount: 0,
  }),
}));

describe('gemini.tsx main function', () => {
  let originalEnvGeminiSandbox: string | undefined;
  let originalEnvSandbox: string | undefined;
@@ -362,7 +371,6 @@ describe('gemini.tsx main function', () => {
    expect(inputArg).toBe('hello stream');

    expect(validateAuthSpy).toHaveBeenCalledWith(
      undefined,
      undefined,
      configStub,
      expect.any(Object),

@@ -4,7 +4,7 @@
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Config, AuthType } from '@qwen-code/qwen-code-core';
import type { Config } from '@qwen-code/qwen-code-core';
import { InputFormat, logUserPrompt } from '@qwen-code/qwen-code-core';
import { render } from 'ink';
import dns from 'node:dns';
@@ -252,22 +252,20 @@ export async function main() {
    argv,
  );

  if (
    settings.merged.security?.auth?.selectedType &&
    !settings.merged.security?.auth?.useExternal
  ) {
  if (!settings.merged.security?.auth?.useExternal) {
    // Validate authentication here because the sandbox will interfere with the Oauth2 web redirect.
    try {
      const err = validateAuthMethod(
        settings.merged.security.auth.selectedType,
      );
      if (err) {
        throw new Error(err);
      }
      const authType = partialConfig.modelsConfig.getCurrentAuthType();
      // Fresh users may not have selected/persisted an authType yet.
      // In that case, defer auth prompting/selection to the main interactive flow.
      if (authType) {
        const err = validateAuthMethod(authType, partialConfig);
        if (err) {
          throw new Error(err);
        }

        await partialConfig.refreshAuth(
          settings.merged.security.auth.selectedType,
        );
        await partialConfig.refreshAuth(authType);
      }
    } catch (err) {
      console.error('Error authenticating:', err);
      process.exit(1);
@@ -440,8 +438,6 @@ export async function main() {
  }

  const nonInteractiveConfig = await validateNonInteractiveAuth(
    (argv.authType as AuthType) ||
      settings.merged.security?.auth?.selectedType,
    settings.merged.security?.auth?.useExternal,
    config,
    settings,

@@ -45,7 +45,8 @@ export default {
  'Initializing...': 'Initialisierung...',
  'Connecting to MCP servers... ({{connected}}/{{total}})':
    'Verbindung zu MCP-Servern wird hergestellt... ({{connected}}/{{total}})',
  'Type your message or @path/to/file': 'Nachricht eingeben oder @Pfad/zur/Datei',
  'Type your message or @path/to/file':
    'Nachricht eingeben oder @Pfad/zur/Datei',
  "Press 'i' for INSERT mode and 'Esc' for NORMAL mode.":
    "Drücken Sie 'i' für den EINFÜGE-Modus und 'Esc' für den NORMAL-Modus.",
  'Cancel operation / Clear input (double press)':
@@ -89,7 +90,8 @@
  'No tools available': 'Keine Werkzeuge verfügbar',
  'View or change the approval mode for tool usage':
    'Genehmigungsmodus für Werkzeugnutzung anzeigen oder ändern',
  'View or change the language setting': 'Spracheinstellung anzeigen oder ändern',
  'View or change the language setting':
    'Spracheinstellung anzeigen oder ändern',
  'change the theme': 'Design ändern',
  'Select Theme': 'Design auswählen',
  Preview: 'Vorschau',
@@ -213,14 +215,16 @@
  'All Tools': 'Alle Werkzeuge',
  'Read-only Tools': 'Nur-Lese-Werkzeuge',
  'Read & Edit Tools': 'Lese- und Bearbeitungswerkzeuge',
  'Read & Edit & Execution Tools': 'Lese-, Bearbeitungs- und Ausführungswerkzeuge',
  'Read & Edit & Execution Tools':
    'Lese-, Bearbeitungs- und Ausführungswerkzeuge',
  'All tools selected, including MCP tools':
    'Alle Werkzeuge ausgewählt, einschließlich MCP-Werkzeuge',
  'Selected tools:': 'Ausgewählte Werkzeuge:',
  'Read-only tools:': 'Nur-Lese-Werkzeuge:',
  'Edit tools:': 'Bearbeitungswerkzeuge:',
  'Execution tools:': 'Ausführungswerkzeuge:',
  'Step {{n}}: Choose Background Color': 'Schritt {{n}}: Hintergrundfarbe wählen',
  'Step {{n}}: Choose Background Color':
    'Schritt {{n}}: Hintergrundfarbe wählen',
  'Step {{n}}: Confirm and Save': 'Schritt {{n}}: Bestätigen und Speichern',
  // Agents - Navigation & Instructions
  'Esc to cancel': 'Esc zum Abbrechen',
@@ -245,14 +249,16 @@
  'e.g., Reviews code for best practices and potential bugs.':
    'z.B. Überprüft Code auf Best Practices und mögliche Fehler.',
  'Description cannot be empty.': 'Beschreibung darf nicht leer sein.',
  'Failed to launch editor: {{error}}': 'Fehler beim Starten des Editors: {{error}}',
  'Failed to launch editor: {{error}}':
    'Fehler beim Starten des Editors: {{error}}',
  'Failed to save and edit subagent: {{error}}':
    'Fehler beim Speichern und Bearbeiten des Unteragenten: {{error}}',

  // ============================================================================
  // Commands - General (continued)
  // ============================================================================
  'View and edit Qwen Code settings': 'Qwen Code Einstellungen anzeigen und bearbeiten',
  'View and edit Qwen Code settings':
    'Qwen Code Einstellungen anzeigen und bearbeiten',
  Settings: 'Einstellungen',
  '(Use Enter to select{{tabText}})': '(Enter zum Auswählen{{tabText}})',
  ', Tab to change focus': ', Tab zum Fokuswechsel',
@@ -308,7 +314,8 @@
  'Use Ripgrep': 'Ripgrep verwenden',
  'Use Builtin Ripgrep': 'Integriertes Ripgrep verwenden',
  'Enable Tool Output Truncation': 'Werkzeugausgabe-Kürzung aktivieren',
  'Tool Output Truncation Threshold': 'Schwellenwert für Werkzeugausgabe-Kürzung',
  'Tool Output Truncation Threshold':
    'Schwellenwert für Werkzeugausgabe-Kürzung',
  'Tool Output Truncation Lines': 'Zeilen für Werkzeugausgabe-Kürzung',
  'Folder Trust': 'Ordnervertrauen',
  'Vision Model Preview': 'Vision-Modell-Vorschau',
@@ -364,7 +371,8 @@
  'Failed to parse {{terminalName}} keybindings.json. The file contains invalid JSON. Please fix the file manually or delete it to allow automatic configuration.':
    'Fehler beim Parsen von {{terminalName}} keybindings.json. Die Datei enthält ungültiges JSON. Bitte korrigieren Sie die Datei manuell oder löschen Sie sie, um automatische Konfiguration zu ermöglichen.',
  'Error: {{error}}': 'Fehler: {{error}}',
  'Shift+Enter binding already exists': 'Umschalt+Enter-Belegung existiert bereits',
  'Shift+Enter binding already exists':
    'Umschalt+Enter-Belegung existiert bereits',
  'Ctrl+Enter binding already exists': 'Strg+Enter-Belegung existiert bereits',
  'Existing keybindings detected. Will not modify to avoid conflicts.':
    'Bestehende Tastenbelegungen erkannt. Keine Änderungen, um Konflikte zu vermeiden.',
@@ -398,7 +406,8 @@
  'Set UI language': 'UI-Sprache festlegen',
  'Set LLM output language': 'LLM-Ausgabesprache festlegen',
  'Usage: /language ui [zh-CN|en-US]': 'Verwendung: /language ui [zh-CN|en-US]',
  'Usage: /language output <language>': 'Verwendung: /language output <Sprache>',
  'Usage: /language output <language>':
    'Verwendung: /language output <Sprache>',
  'Example: /language output 中文': 'Beispiel: /language output Deutsch',
  'Example: /language output English': 'Beispiel: /language output English',
  'Example: /language output 日本語': 'Beispiel: /language output Japanisch',
@@ -419,7 +428,8 @@
  ' - en-US: English': ' - en-US: Englisch',
  'Set UI language to Simplified Chinese (zh-CN)':
    'UI-Sprache auf Vereinfachtes Chinesisch (zh-CN) setzen',
  'Set UI language to English (en-US)': 'UI-Sprache auf Englisch (en-US) setzen',
  'Set UI language to English (en-US)':
    'UI-Sprache auf Englisch (en-US) setzen',

  // ============================================================================
  // Commands - Approval Mode
@@ -427,7 +437,8 @@
  'Approval Mode': 'Genehmigungsmodus',
  'Current approval mode: {{mode}}': 'Aktueller Genehmigungsmodus: {{mode}}',
  'Available approval modes:': 'Verfügbare Genehmigungsmodi:',
  'Approval mode changed to: {{mode}}': 'Genehmigungsmodus geändert zu: {{mode}}',
  'Approval mode changed to: {{mode}}':
    'Genehmigungsmodus geändert zu: {{mode}}',
  'Approval mode changed to: {{mode}} (saved to {{scope}} settings{{location}})':
    'Genehmigungsmodus geändert zu: {{mode}} (gespeichert in {{scope}} Einstellungen{{location}})',
  'Usage: /approval-mode <mode> [--session|--user|--project]':
@@ -452,14 +463,16 @@
    'Fehler beim Ändern des Genehmigungsmodus: {{error}}',
  'Apply to current session only (temporary)':
    'Nur auf aktuelle Sitzung anwenden (temporär)',
  'Persist for this project/workspace': 'Für dieses Projekt/Arbeitsbereich speichern',
  'Persist for this project/workspace':
    'Für dieses Projekt/Arbeitsbereich speichern',
  'Persist for this user on this machine':
    'Für diesen Benutzer auf diesem Computer speichern',
  'Analyze only, do not modify files or execute commands':
    'Nur analysieren, keine Dateien ändern oder Befehle ausführen',
  'Require approval for file edits or shell commands':
    'Genehmigung für Dateibearbeitungen oder Shell-Befehle erforderlich',
  'Automatically approve file edits': 'Dateibearbeitungen automatisch genehmigen',
  'Automatically approve file edits':
    'Dateibearbeitungen automatisch genehmigen',
  'Automatically approve all tools': 'Alle Werkzeuge automatisch genehmigen',
  'Workspace approval mode exists and takes priority. User-level change will have no effect.':
    'Arbeitsbereich-Genehmigungsmodus existiert und hat Vorrang. Benutzerebene-Änderung hat keine Wirkung.',
@@ -475,12 +488,14 @@
  'Commands for interacting with memory.':
    'Befehle für die Interaktion mit dem Speicher.',
  'Show the current memory contents.': 'Aktuellen Speicherinhalt anzeigen.',
  'Show project-level memory contents.': 'Projektebene-Speicherinhalt anzeigen.',
  'Show project-level memory contents.':
    'Projektebene-Speicherinhalt anzeigen.',
  'Show global memory contents.': 'Globalen Speicherinhalt anzeigen.',
  'Add content to project-level memory.':
    'Inhalt zum Projektebene-Speicher hinzufügen.',
  'Add content to global memory.': 'Inhalt zum globalen Speicher hinzufügen.',
  'Refresh the memory from the source.': 'Speicher aus der Quelle aktualisieren.',
  'Refresh the memory from the source.':
    'Speicher aus der Quelle aktualisieren.',
  'Usage: /memory add --project <text to remember>':
    'Verwendung: /memory add --project <zu merkender Text>',
  'Usage: /memory add --global <text to remember>':
@@ -520,7 +535,8 @@
    'Konfigurierte MCP-Server und Werkzeuge auflisten',
  'Restarts MCP servers.': 'MCP-Server neu starten.',
  'Config not loaded.': 'Konfiguration nicht geladen.',
  'Could not retrieve tool registry.': 'Werkzeugregister konnte nicht abgerufen werden.',
  'Could not retrieve tool registry.':
    'Werkzeugregister konnte nicht abgerufen werden.',
  'No MCP servers configured with OAuth authentication.':
    'Keine MCP-Server mit OAuth-Authentifizierung konfiguriert.',
  'MCP servers with OAuth authentication:':
@@ -539,7 +555,8 @@
  // Commands - Chat
  // ============================================================================
  'Manage conversation history.': 'Gesprächsverlauf verwalten.',
  'List saved conversation checkpoints': 'Gespeicherte Gesprächsprüfpunkte auflisten',
  'List saved conversation checkpoints':
    'Gespeicherte Gesprächsprüfpunkte auflisten',
  'No saved conversation checkpoints found.':
    'Keine gespeicherten Gesprächsprüfpunkte gefunden.',
  'List of saved conversations:': 'Liste gespeicherter Gespräche:',
@@ -589,7 +606,8 @@
    'Kein Chat-Client verfügbar, um Zusammenfassung zu generieren.',
  'Already generating summary, wait for previous request to complete':
    'Zusammenfassung wird bereits generiert, warten Sie auf Abschluss der vorherigen Anfrage',
  'No conversation found to summarize.': 'Kein Gespräch zum Zusammenfassen gefunden.',
  'No conversation found to summarize.':
    'Kein Gespräch zum Zusammenfassen gefunden.',
  'Failed to generate project context summary: {{error}}':
    'Fehler beim Generieren der Projektkontextzusammenfassung: {{error}}',
  'Saved project summary to {{filePathForDisplay}}.':
@@ -605,7 +623,8 @@
  'Switch the model for this session': 'Modell für diese Sitzung wechseln',
  'Content generator configuration not available.':
    'Inhaltsgenerator-Konfiguration nicht verfügbar.',
  'Authentication type not available.': 'Authentifizierungstyp nicht verfügbar.',
  'Authentication type not available.':
    'Authentifizierungstyp nicht verfügbar.',
  'No models available for the current authentication type ({{authType}}).':
    'Keine Modelle für den aktuellen Authentifizierungstyp ({{authType}}) verfügbar.',

@@ -622,7 +641,8 @@
  // ============================================================================
  'Already compressing, wait for previous request to complete':
    'Komprimierung läuft bereits, warten Sie auf Abschluss der vorherigen Anfrage',
  'Failed to compress chat history.': 'Fehler beim Komprimieren des Chatverlaufs.',
  'Failed to compress chat history.':
    'Fehler beim Komprimieren des Chatverlaufs.',
  'Failed to compress chat history: {{error}}':
    'Fehler beim Komprimieren des Chatverlaufs: {{error}}',
  'Compressing chat history': 'Chatverlauf wird komprimiert',
@@ -644,10 +664,12 @@
    'Bitte geben Sie mindestens einen Pfad zum Hinzufügen an.',
  'The /directory add command is not supported in restrictive sandbox profiles. Please use --include-directories when starting the session instead.':
    'Der Befehl /directory add wird in restriktiven Sandbox-Profilen nicht unterstützt. Bitte verwenden Sie --include-directories beim Starten der Sitzung.',
  "Error adding '{{path}}': {{error}}": "Fehler beim Hinzufügen von '{{path}}': {{error}}",
  "Error adding '{{path}}': {{error}}":
    "Fehler beim Hinzufügen von '{{path}}': {{error}}",
  'Successfully added QWEN.md files from the following directories if there are:\n- {{directories}}':
    'QWEN.md-Dateien aus folgenden Verzeichnissen erfolgreich hinzugefügt, falls vorhanden:\n- {{directories}}',
  'Error refreshing memory: {{error}}': 'Fehler beim Aktualisieren des Speichers: {{error}}',
  'Error refreshing memory: {{error}}':
    'Fehler beim Aktualisieren des Speichers: {{error}}',
  'Successfully added directories:\n- {{directories}}':
    'Verzeichnisse erfolgreich hinzugefügt:\n- {{directories}}',
  'Current workspace directories:\n{{directories}}':
@@ -677,7 +699,8 @@
  'Yes, allow always': 'Ja, immer erlauben',
  'Modify with external editor': 'Mit externem Editor bearbeiten',
  'No, suggest changes (esc)': 'Nein, Änderungen vorschlagen (Esc)',
  "Allow execution of: '{{command}}'?": "Ausführung erlauben von: '{{command}}'?",
  "Allow execution of: '{{command}}'?":
    "Ausführung erlauben von: '{{command}}'?",
  'Yes, allow always ...': 'Ja, immer erlauben ...',
  'Yes, and auto-accept edits': 'Ja, und Änderungen automatisch akzeptieren',
  'Yes, and manually approve edits': 'Ja, und Änderungen manuell genehmigen',
@@ -749,12 +772,14 @@
  'Qwen OAuth authentication cancelled.':
    'Qwen OAuth-Authentifizierung abgebrochen.',
  'Qwen OAuth Authentication': 'Qwen OAuth-Authentifizierung',
  'Please visit this URL to authorize:': 'Bitte besuchen Sie diese URL zur Autorisierung:',
  'Please visit this URL to authorize:':
    'Bitte besuchen Sie diese URL zur Autorisierung:',
  'Or scan the QR code below:': 'Oder scannen Sie den QR-Code unten:',
  'Waiting for authorization': 'Warten auf Autorisierung',
  'Time remaining:': 'Verbleibende Zeit:',
  '(Press ESC or CTRL+C to cancel)': '(ESC oder STRG+C zum Abbrechen drücken)',
  'Qwen OAuth Authentication Timeout': 'Qwen OAuth-Authentifizierung abgelaufen',
  'Qwen OAuth Authentication Timeout':
    'Qwen OAuth-Authentifizierung abgelaufen',
  'OAuth token expired (over {{seconds}} seconds). Please select authentication method again.':
    'OAuth-Token abgelaufen (über {{seconds}} Sekunden). Bitte wählen Sie erneut eine Authentifizierungsmethode.',
  'Press any key to return to authentication type selection.':
@@ -767,6 +792,22 @@
    'Authentifizierung abgelaufen. Bitte versuchen Sie es erneut.',
  'Waiting for auth... (Press ESC or CTRL+C to cancel)':
    'Warten auf Authentifizierung... (ESC oder STRG+C zum Abbrechen drücken)',
  'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
    'API-Schlüssel für OpenAI-kompatible Authentifizierung fehlt. Setzen Sie settings.security.auth.apiKey oder die Umgebungsvariable {{envKeyHint}}.',
  '{{envKeyHint}} environment variable not found.':
    'Umgebungsvariable {{envKeyHint}} wurde nicht gefunden.',
  '{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
    'Umgebungsvariable {{envKeyHint}} wurde nicht gefunden. Bitte legen Sie sie in Ihrer .env-Datei oder den Systemumgebungsvariablen fest.',
  '{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
    'Umgebungsvariable {{envKeyHint}} wurde nicht gefunden (oder setzen Sie settings.security.auth.apiKey). Bitte legen Sie sie in Ihrer .env-Datei oder den Systemumgebungsvariablen fest.',
  'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
    'API-Schlüssel für OpenAI-kompatible Authentifizierung fehlt. Setzen Sie die Umgebungsvariable {{envKeyHint}}.',
  'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
    'Anthropic-Anbieter fehlt erforderliche baseUrl in modelProviders[].baseUrl.',
  'ANTHROPIC_BASE_URL environment variable not found.':
    'Umgebungsvariable ANTHROPIC_BASE_URL wurde nicht gefunden.',
  'Invalid auth method selected.':
    'Ungültige Authentifizierungsmethode ausgewählt.',
  'Failed to authenticate. Message: {{message}}':
    'Authentifizierung fehlgeschlagen. Meldung: {{message}}',
  'Authenticated successfully with {{authType}} credentials.':
@@ -779,7 +820,8 @@
  'API Key:': 'API-Schlüssel:',
  'Invalid credentials: {{errorMessage}}':
    'Ungültige Anmeldedaten: {{errorMessage}}',
  'Failed to validate credentials': 'Anmeldedaten konnten nicht validiert werden',
  'Failed to validate credentials':
    'Anmeldedaten konnten nicht validiert werden',
  'Press Enter to continue, Tab/↑↓ to navigate, Esc to cancel':
    'Enter zum Fortfahren, Tab/↑↓ zum Navigieren, Esc zum Abbrechen',

@@ -788,6 +830,15 @@
  // ============================================================================
  'Select Model': 'Modell auswählen',
  '(Press Esc to close)': '(Esc zum Schließen drücken)',
  'Current (effective) configuration': 'Aktuelle (wirksame) Konfiguration',
  AuthType: 'Authentifizierungstyp',
  'API Key': 'API-Schlüssel',
  unset: 'nicht gesetzt',
  '(default)': '(Standard)',
  '(set)': '(gesetzt)',
  '(not set)': '(nicht gesetzt)',
  "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
    "Modell konnte nicht auf '{{modelId}}' umgestellt werden.\n\n{{error}}",
  'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
    'Das neueste Qwen Coder Modell von Alibaba Cloud ModelStudio (Version: qwen3-coder-plus-2025-09-23)',
  'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':
@@ -877,8 +928,10 @@
  // ============================================================================
  // Exit Screen / Stats
  // ============================================================================
  'Agent powering down. Goodbye!': 'Agent wird heruntergefahren. Auf Wiedersehen!',
  'To continue this session, run': 'Um diese Sitzung fortzusetzen, führen Sie aus',
  'Agent powering down. Goodbye!':
    'Agent wird heruntergefahren. Auf Wiedersehen!',
|
||||
'To continue this session, run':
|
||||
'Um diese Sitzung fortzusetzen, führen Sie aus',
|
||||
'Interaction Summary': 'Interaktionszusammenfassung',
|
||||
'Session ID:': 'Sitzungs-ID:',
|
||||
'Tool Calls:': 'Werkzeugaufrufe:',
|
||||
|
||||
@@ -770,6 +770,21 @@ export default {
    'Authentication timed out. Please try again.',
  'Waiting for auth... (Press ESC or CTRL+C to cancel)':
    'Waiting for auth... (Press ESC or CTRL+C to cancel)',
  'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
    'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.',
  '{{envKeyHint}} environment variable not found.':
    '{{envKeyHint}} environment variable not found.',
  '{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
    '{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.',
  '{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
    '{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.',
  'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
    'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.',
  'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
    'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.',
  'ANTHROPIC_BASE_URL environment variable not found.':
    'ANTHROPIC_BASE_URL environment variable not found.',
  'Invalid auth method selected.': 'Invalid auth method selected.',
  'Failed to authenticate. Message: {{message}}':
    'Failed to authenticate. Message: {{message}}',
  'Authenticated successfully with {{authType}} credentials.':
@@ -791,6 +806,15 @@ export default {
  // ============================================================================
  'Select Model': 'Select Model',
  '(Press Esc to close)': '(Press Esc to close)',
  'Current (effective) configuration': 'Current (effective) configuration',
  AuthType: 'AuthType',
  'API Key': 'API Key',
  unset: 'unset',
  '(default)': '(default)',
  '(set)': '(set)',
  '(not set)': '(not set)',
  "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
    "Failed to switch model to '{{modelId}}'.\n\n{{error}}",
  'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
    'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)',
  'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':

@@ -786,6 +786,21 @@ export default {
    'Время ожидания авторизации истекло. Пожалуйста, попробуйте снова.',
  'Waiting for auth... (Press ESC or CTRL+C to cancel)':
    'Ожидание авторизации... (Нажмите ESC или CTRL+C для отмены)',
  'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
    'Отсутствует API-ключ для аутентификации, совместимой с OpenAI. Укажите settings.security.auth.apiKey или переменную окружения {{envKeyHint}}.',
  '{{envKeyHint}} environment variable not found.':
    'Переменная окружения {{envKeyHint}} не найдена.',
  '{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
    'Переменная окружения {{envKeyHint}} не найдена. Укажите её в файле .env или среди системных переменных.',
  '{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
    'Переменная окружения {{envKeyHint}} не найдена (или установите settings.security.auth.apiKey). Укажите её в файле .env или среди системных переменных.',
  'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
    'Отсутствует API-ключ для аутентификации, совместимой с OpenAI. Установите переменную окружения {{envKeyHint}}.',
  'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
    'У провайдера Anthropic отсутствует обязательный baseUrl в modelProviders[].baseUrl.',
  'ANTHROPIC_BASE_URL environment variable not found.':
    'Переменная окружения ANTHROPIC_BASE_URL не найдена.',
  'Invalid auth method selected.': 'Выбран недопустимый метод авторизации.',
  'Failed to authenticate. Message: {{message}}':
    'Не удалось авторизоваться. Сообщение: {{message}}',
  'Authenticated successfully with {{authType}} credentials.':
@@ -807,6 +822,15 @@ export default {
  // ============================================================================
  'Select Model': 'Выбрать модель',
  '(Press Esc to close)': '(Нажмите Esc для закрытия)',
  'Current (effective) configuration': 'Текущая (фактическая) конфигурация',
  AuthType: 'Тип авторизации',
  'API Key': 'API-ключ',
  unset: 'не задано',
  '(default)': '(по умолчанию)',
  '(set)': '(установлено)',
  '(not set)': '(не задано)',
  "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
    "Не удалось переключиться на модель '{{modelId}}'.\n\n{{error}}",
  'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
    'Последняя модель Qwen Coder от Alibaba Cloud ModelStudio (версия: qwen3-coder-plus-2025-09-23)',
  'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':

@@ -728,6 +728,21 @@ export default {
  'Authentication timed out. Please try again.': '认证超时。请重试。',
  'Waiting for auth... (Press ESC or CTRL+C to cancel)':
    '正在等待认证...(按 ESC 或 CTRL+C 取消)',
  'Missing API key for OpenAI-compatible auth. Set settings.security.auth.apiKey, or set the {{envKeyHint}} environment variable.':
    '缺少 OpenAI 兼容认证的 API 密钥。请设置 settings.security.auth.apiKey 或设置 {{envKeyHint}} 环境变量。',
  '{{envKeyHint}} environment variable not found.':
    '未找到 {{envKeyHint}} 环境变量。',
  '{{envKeyHint}} environment variable not found. Please set it in your .env file or environment variables.':
    '未找到 {{envKeyHint}} 环境变量。请在 .env 文件或系统环境变量中进行设置。',
  '{{envKeyHint}} environment variable not found (or set settings.security.auth.apiKey). Please set it in your .env file or environment variables.':
    '未找到 {{envKeyHint}} 环境变量(或设置 settings.security.auth.apiKey)。请在 .env 文件或系统环境变量中进行设置。',
  'Missing API key for OpenAI-compatible auth. Set the {{envKeyHint}} environment variable.':
    '缺少 OpenAI 兼容认证的 API 密钥。请设置 {{envKeyHint}} 环境变量。',
  'Anthropic provider missing required baseUrl in modelProviders[].baseUrl.':
    'Anthropic 提供商缺少必需的 baseUrl,请在 modelProviders[].baseUrl 中配置。',
  'ANTHROPIC_BASE_URL environment variable not found.':
    '未找到 ANTHROPIC_BASE_URL 环境变量。',
  'Invalid auth method selected.': '选择了无效的认证方式。',
  'Failed to authenticate. Message: {{message}}': '认证失败。消息:{{message}}',
  'Authenticated successfully with {{authType}} credentials.':
    '使用 {{authType}} 凭据成功认证。',
@@ -747,6 +762,15 @@ export default {
  // ============================================================================
  'Select Model': '选择模型',
  '(Press Esc to close)': '(按 Esc 关闭)',
  'Current (effective) configuration': '当前(实际生效)配置',
  AuthType: '认证方式',
  'API Key': 'API 密钥',
  unset: '未设置',
  '(default)': '(默认)',
  '(set)': '(已设置)',
  '(not set)': '(未设置)',
  "Failed to switch model to '{{modelId}}'.\n\n{{error}}":
    "无法切换到模型 '{{modelId}}'.\n\n{{error}}",
  'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)':
    '来自阿里云 ModelStudio 的最新 Qwen Coder 模型(版本:qwen3-coder-plus-2025-09-23)',
  'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)':

@@ -6,10 +6,12 @@

import { render } from 'ink-testing-library';
import type React from 'react';
import type { Config } from '@qwen-code/qwen-code-core';
import { LoadedSettings } from '../config/settings.js';
import { KeypressProvider } from '../ui/contexts/KeypressContext.js';
import { SettingsContext } from '../ui/contexts/SettingsContext.js';
import { ShellFocusContext } from '../ui/contexts/ShellFocusContext.js';
import { ConfigContext } from '../ui/contexts/ConfigContext.js';

const mockSettings = new LoadedSettings(
  { path: '', settings: {}, originalSettings: {} },
@@ -22,14 +24,24 @@ const mockSettings = new LoadedSettings(

export const renderWithProviders = (
  component: React.ReactElement,
  { shellFocus = true, settings = mockSettings } = {},
  {
    shellFocus = true,
    settings = mockSettings,
    config = undefined,
  }: {
    shellFocus?: boolean;
    settings?: LoadedSettings;
    config?: Config;
  } = {},
): ReturnType<typeof render> =>
  render(
    <SettingsContext.Provider value={settings}>
      <ShellFocusContext.Provider value={shellFocus}>
        <KeypressProvider kittyProtocolEnabled={true}>
          {component}
        </KeypressProvider>
      </ShellFocusContext.Provider>
      <ConfigContext.Provider value={config}>
        <ShellFocusContext.Provider value={shellFocus}>
          <KeypressProvider kittyProtocolEnabled={true}>
            {component}
          </KeypressProvider>
        </ShellFocusContext.Provider>
      </ConfigContext.Provider>
    </SettingsContext.Provider>,
  );
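
The hunk above extends `renderWithProviders` with an optional `config` that is forwarded through `ConfigContext.Provider`. As a minimal, illustrative sketch (not part of the diff), a test might inject a mock `Config` like this; the rendered component, import paths, and assertion are assumptions, only the `{ settings, shellFocus, config }` options shape comes from the change:

```tsx
// Hypothetical test; only the methods a component actually calls need stubbing.
import { it, expect, vi } from 'vitest';
import { Text } from 'ink';
import type { Config } from '@qwen-code/qwen-code-core';
import { renderWithProviders } from '../test-utils/render.js';

it('threads a mock Config through ConfigContext', () => {
  const mockConfig = {
    getAuthType: vi.fn(() => undefined),
  } as unknown as Config;

  const { lastFrame, unmount } = renderWithProviders(<Text>hello</Text>, {
    config: mockConfig,
  });

  expect(lastFrame()).toContain('hello');
  unmount();
});
```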
@@ -32,7 +32,6 @@ import {
  type Config,
  type IdeInfo,
  type IdeContext,
  DEFAULT_GEMINI_FLASH_MODEL,
  IdeClient,
  ideContextStore,
  getErrorMessage,
@@ -180,15 +179,10 @@ export const AppContainer = (props: AppContainerProps) => {
    [],
  );

  // Helper to determine the effective model, considering the fallback state.
  const getEffectiveModel = useCallback(() => {
    if (config.isInFallbackMode()) {
      return DEFAULT_GEMINI_FLASH_MODEL;
    }
    return config.getModel();
  }, [config]);
  // Helper to determine the current model (polled, since Config has no model-change event).
  const getCurrentModel = useCallback(() => config.getModel(), [config]);

  const [currentModel, setCurrentModel] = useState(getEffectiveModel());
  const [currentModel, setCurrentModel] = useState(getCurrentModel());

  const [isConfigInitialized, setConfigInitialized] = useState(false);

@@ -241,12 +235,12 @@ export const AppContainer = (props: AppContainerProps) => {
    [historyManager.addItem],
  );

  // Watch for model changes (e.g., from Flash fallback)
  // Watch for model changes (e.g., user switches model via /model)
  useEffect(() => {
    const checkModelChange = () => {
      const effectiveModel = getEffectiveModel();
      if (effectiveModel !== currentModel) {
        setCurrentModel(effectiveModel);
      const model = getCurrentModel();
      if (model !== currentModel) {
        setCurrentModel(model);
      }
    };

@@ -254,7 +248,7 @@ export const AppContainer = (props: AppContainerProps) => {
    const interval = setInterval(checkModelChange, 1000); // Check every second

    return () => clearInterval(interval);
  }, [config, currentModel, getEffectiveModel]);
  }, [config, currentModel, getCurrentModel]);

  const {
    consoleMessages,
@@ -376,37 +370,36 @@ export const AppContainer = (props: AppContainerProps) => {
  // Check for enforced auth type mismatch
  useEffect(() => {
    // Check for initialization error first
    const currentAuthType = config.modelsConfig.getCurrentAuthType();

    if (
      settings.merged.security?.auth?.enforcedType &&
      settings.merged.security?.auth.selectedType &&
      settings.merged.security?.auth.enforcedType !==
        settings.merged.security?.auth.selectedType
      currentAuthType &&
      settings.merged.security?.auth.enforcedType !== currentAuthType
    ) {
      onAuthError(
        t(
          'Authentication is enforced to be {{enforcedType}}, but you are currently using {{currentType}}.',
          {
            enforcedType: settings.merged.security?.auth.enforcedType,
            currentType: settings.merged.security?.auth.selectedType,
            enforcedType: String(settings.merged.security?.auth.enforcedType),
            currentType: String(currentAuthType),
          },
        ),
      );
    } else if (
      settings.merged.security?.auth?.selectedType &&
      !settings.merged.security?.auth?.useExternal
    ) {
      const error = validateAuthMethod(
        settings.merged.security.auth.selectedType,
      );
      if (error) {
        onAuthError(error);
    } else if (!settings.merged.security?.auth?.useExternal) {
      // If no authType is selected yet, allow the auth UI flow to prompt the user.
      // Only validate credentials once a concrete authType exists.
      if (currentAuthType) {
        const error = validateAuthMethod(currentAuthType, config);
        if (error) {
          onAuthError(error);
        }
      }
    }
  }, [
    settings.merged.security?.auth?.selectedType,
    settings.merged.security?.auth?.enforcedType,
    settings.merged.security?.auth?.useExternal,
    config,
    onAuthError,
  ]);

@@ -6,7 +6,8 @@

import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { AuthDialog } from './AuthDialog.js';
import { LoadedSettings, SettingScope } from '../../config/settings.js';
import { LoadedSettings } from '../../config/settings.js';
import type { Config } from '@qwen-code/qwen-code-core';
import { AuthType } from '@qwen-code/qwen-code-core';
import { renderWithProviders } from '../../test-utils/render.js';
import { UIStateContext } from '../contexts/UIStateContext.js';
@@ -43,17 +44,24 @@ const renderAuthDialog = (
  settings: LoadedSettings,
  uiStateOverrides: Partial<UIState> = {},
  uiActionsOverrides: Partial<UIActions> = {},
  configAuthType: AuthType | undefined = undefined,
  configApiKey: string | undefined = undefined,
) => {
  const uiState = createMockUIState(uiStateOverrides);
  const uiActions = createMockUIActions(uiActionsOverrides);

  const mockConfig = {
    getAuthType: vi.fn(() => configAuthType),
    getContentGeneratorConfig: vi.fn(() => ({ apiKey: configApiKey })),
  } as unknown as Config;

  return renderWithProviders(
    <UIStateContext.Provider value={uiState}>
      <UIActionsContext.Provider value={uiActions}>
        <AuthDialog />
      </UIActionsContext.Provider>
    </UIStateContext.Provider>,
    { settings },
    { settings, config: mockConfig },
  );
};

@@ -421,6 +429,7 @@ describe('AuthDialog', () => {
      settings,
      {},
      { handleAuthSelect },
      undefined, // config.getAuthType() returns undefined
    );
    await wait();

@@ -475,6 +484,7 @@ describe('AuthDialog', () => {
      settings,
      { authError: 'Initial error' },
      { handleAuthSelect },
      undefined, // config.getAuthType() returns undefined
    );
    await wait();

@@ -528,6 +538,7 @@ describe('AuthDialog', () => {
      settings,
      {},
      { handleAuthSelect },
      AuthType.USE_OPENAI, // config.getAuthType() returns USE_OPENAI
    );
    await wait();

@@ -536,7 +547,7 @@ describe('AuthDialog', () => {
    await wait();

    // Should call handleAuthSelect with undefined to exit
    expect(handleAuthSelect).toHaveBeenCalledWith(undefined, SettingScope.User);
    expect(handleAuthSelect).toHaveBeenCalledWith(undefined);
    unmount();
  });
});

@@ -8,13 +8,12 @@ import type React from 'react';
import { useState } from 'react';
import { AuthType } from '@qwen-code/qwen-code-core';
import { Box, Text } from 'ink';
import { SettingScope } from '../../config/settings.js';
import { Colors } from '../colors.js';
import { useKeypress } from '../hooks/useKeypress.js';
import { RadioButtonSelect } from '../components/shared/RadioButtonSelect.js';
import { useUIState } from '../contexts/UIStateContext.js';
import { useUIActions } from '../contexts/UIActionsContext.js';
import { useSettings } from '../contexts/SettingsContext.js';
import { useConfig } from '../contexts/ConfigContext.js';
import { t } from '../../i18n/index.js';

function parseDefaultAuthType(
@@ -32,7 +31,7 @@ export function AuthDialog(): React.JSX.Element {
  const { pendingAuthType, authError } = useUIState();
  const { handleAuthSelect: onAuthSelect } = useUIActions();
  const settings = useSettings();
  const config = useConfig();

  const [errorMessage, setErrorMessage] = useState<string | null>(null);
  const [selectedIndex, setSelectedIndex] = useState<number | null>(null);
@@ -58,9 +57,10 @@ export function AuthDialog(): React.JSX.Element {
      return item.value === pendingAuthType;
    }

    // Priority 2: settings.merged.security?.auth?.selectedType
    if (settings.merged.security?.auth?.selectedType) {
      return item.value === settings.merged.security?.auth?.selectedType;
    // Priority 2: config.getAuthType() - the source of truth
    const currentAuthType = config.getAuthType();
    if (currentAuthType) {
      return item.value === currentAuthType;
    }

    // Priority 3: QWEN_DEFAULT_AUTH_TYPE env var
@@ -76,7 +76,7 @@ export function AuthDialog(): React.JSX.Element {
    }),
  );

  const hasApiKey = Boolean(settings.merged.security?.auth?.apiKey);
  const hasApiKey = Boolean(config.getContentGeneratorConfig()?.apiKey);
  const currentSelectedAuthType =
    selectedIndex !== null
      ? items[selectedIndex]?.value
@@ -84,7 +84,7 @@ export function AuthDialog(): React.JSX.Element {

  const handleAuthSelect = async (authMethod: AuthType) => {
    setErrorMessage(null);
    await onAuthSelect(authMethod, SettingScope.User);
    await onAuthSelect(authMethod);
  };

  const handleHighlight = (authMethod: AuthType) => {
@@ -100,7 +100,7 @@ export function AuthDialog(): React.JSX.Element {
      if (errorMessage) {
        return;
      }
      if (settings.merged.security?.auth?.selectedType === undefined) {
      if (config.getAuthType() === undefined) {
        // Prevent exiting if no auth method is set
        setErrorMessage(
          t(
@@ -109,7 +109,7 @@ export function AuthDialog(): React.JSX.Element {
        );
        return;
      }
      onAuthSelect(undefined, SettingScope.User);
      onAuthSelect(undefined);
    }
  },
  { isActive: true },
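
In the hunks above, `config.getAuthType()` replaces `settings.merged.security?.auth?.selectedType` as the source of truth for the dialog's default selection. A hedged sketch of that priority order, extracted into a standalone helper for illustration (`resolveInitialAuthType` is hypothetical; only the ordering and the `Config` methods come from the diff):

```ts
import { AuthType, type Config } from '@qwen-code/qwen-code-core';

function parseDefaultAuthType(value: string | undefined): AuthType | undefined {
  return value && Object.values(AuthType).includes(value as AuthType)
    ? (value as AuthType)
    : undefined;
}

export function resolveInitialAuthType(
  pendingAuthType: AuthType | undefined,
  config: Config,
): AuthType | undefined {
  // Priority 1: an auth flow that is already pending.
  if (pendingAuthType) {
    return pendingAuthType;
  }
  // Priority 2: config.getAuthType() - the runtime source of truth.
  const currentAuthType = config.getAuthType();
  if (currentAuthType) {
    return currentAuthType;
  }
  // Priority 3: the QWEN_DEFAULT_AUTH_TYPE environment variable.
  return parseDefaultAuthType(process.env['QWEN_DEFAULT_AUTH_TYPE']);
}
```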
@@ -4,16 +4,16 @@
 * SPDX-License-Identifier: Apache-2.0
 */

import type { Config } from '@qwen-code/qwen-code-core';
import type { Config, ModelProvidersConfig } from '@qwen-code/qwen-code-core';
import {
  AuthEvent,
  AuthType,
  clearCachedCredentialFile,
  getErrorMessage,
  logAuth,
} from '@qwen-code/qwen-code-core';
import { useCallback, useEffect, useState } from 'react';
import type { LoadedSettings, SettingScope } from '../../config/settings.js';
import type { LoadedSettings } from '../../config/settings.js';
import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js';
import type { OpenAICredentials } from '../components/OpenAIKeyPrompt.js';
import { useQwenAuth } from '../hooks/useQwenAuth.js';
import { AuthState, MessageType } from '../types.js';
@@ -27,8 +27,7 @@ export const useAuthCommand = (
  config: Config,
  addItem: (item: Omit<HistoryItem, 'id'>, timestamp: number) => void,
) => {
  const unAuthenticated =
    settings.merged.security?.auth?.selectedType === undefined;
  const unAuthenticated = config.getAuthType() === undefined;

  const [authState, setAuthState] = useState<AuthState>(
    unAuthenticated ? AuthState.Updating : AuthState.Unauthenticated,
@@ -81,35 +80,35 @@ export const useAuthCommand = (
  );

  const handleAuthSuccess = useCallback(
    async (
      authType: AuthType,
      scope: SettingScope,
      credentials?: OpenAICredentials,
    ) => {
    async (authType: AuthType, credentials?: OpenAICredentials) => {
      try {
        settings.setValue(scope, 'security.auth.selectedType', authType);
        const authTypeScope = getPersistScopeForModelSelection(settings);
        settings.setValue(
          authTypeScope,
          'security.auth.selectedType',
          authType,
        );

        // Only update credentials if not switching to QWEN_OAUTH,
        // so that OpenAI credentials are preserved when switching to QWEN_OAUTH.
        if (authType !== AuthType.QWEN_OAUTH && credentials) {
          if (credentials?.apiKey != null) {
            settings.setValue(
              scope,
              authTypeScope,
              'security.auth.apiKey',
              credentials.apiKey,
            );
          }
          if (credentials?.baseUrl != null) {
            settings.setValue(
              scope,
              authTypeScope,
              'security.auth.baseUrl',
              credentials.baseUrl,
            );
          }
          if (credentials?.model != null) {
            settings.setValue(scope, 'model.name', credentials.model);
            settings.setValue(authTypeScope, 'model.name', credentials.model);
          }
          await clearCachedCredentialFile();
        }
      } catch (error) {
        handleAuthFailure(error);
@@ -141,14 +140,10 @@ export const useAuthCommand = (
  );

  const performAuth = useCallback(
    async (
      authType: AuthType,
      scope: SettingScope,
      credentials?: OpenAICredentials,
    ) => {
    async (authType: AuthType, credentials?: OpenAICredentials) => {
      try {
        await config.refreshAuth(authType);
        handleAuthSuccess(authType, scope, credentials);
        handleAuthSuccess(authType, credentials);
      } catch (e) {
        handleAuthFailure(e);
      }
@@ -156,18 +151,51 @@ export const useAuthCommand = (
    [config, handleAuthSuccess, handleAuthFailure],
  );

  const isProviderManagedModel = useCallback(
    (authType: AuthType, modelId: string | undefined) => {
      if (!modelId) {
        return false;
      }

      const modelProviders = settings.merged.modelProviders as
        | ModelProvidersConfig
        | undefined;
      if (!modelProviders) {
        return false;
      }
      const providerModels = modelProviders[authType];
      if (!Array.isArray(providerModels)) {
        return false;
      }
      return providerModels.some(
        (providerModel) => providerModel.id === modelId,
      );
    },
    [settings],
  );

  const handleAuthSelect = useCallback(
    async (
      authType: AuthType | undefined,
      scope: SettingScope,
      credentials?: OpenAICredentials,
    ) => {
    async (authType: AuthType | undefined, credentials?: OpenAICredentials) => {
      if (!authType) {
        setIsAuthDialogOpen(false);
        setAuthError(null);
        return;
      }

      if (
        authType === AuthType.USE_OPENAI &&
        credentials?.model &&
        isProviderManagedModel(authType, credentials.model)
      ) {
        onAuthError(
          t(
            'Model "{{modelName}}" is managed via settings.modelProviders. Please complete the fields in settings, or use another model id.',
            { modelName: credentials.model },
          ),
        );
        return;
      }

      setPendingAuthType(authType);
      setAuthError(null);
      setIsAuthDialogOpen(false);
@@ -180,14 +208,14 @@ export const useAuthCommand = (
          baseUrl: credentials.baseUrl,
          model: credentials.model,
        });
        await performAuth(authType, scope, credentials);
        await performAuth(authType, credentials);
      }
      return;
    }

    await performAuth(authType, scope);
    await performAuth(authType);
  },
  [config, performAuth],
  [config, performAuth, isProviderManagedModel, onAuthError],
  );

  const openAuthDialog = useCallback(() => {
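
The new `isProviderManagedModel` guard above blocks ad-hoc OpenAI credentials from overwriting a model that `settings.modelProviders` already controls. A minimal, self-contained sketch of that lookup (the local types and the concrete ids are made up for illustration; the matching logic mirrors the diff):

```ts
import { AuthType } from '@qwen-code/qwen-code-core';

type ProviderModel = { id: string; envKey: string };

// Hand-written settings fragment standing in for settings.merged.modelProviders.
const modelProviders: Partial<Record<AuthType, ProviderModel[]>> = {
  [AuthType.USE_OPENAI]: [{ id: 'gpt-4o', envKey: 'OPENAI_API_KEY' }],
};

function isProviderManaged(
  authType: AuthType,
  modelId: string | undefined,
): boolean {
  if (!modelId) {
    return false;
  }
  const providerModels = modelProviders[authType];
  return (
    Array.isArray(providerModels) &&
    providerModels.some((providerModel) => providerModel.id === modelId)
  );
}

// A managed id makes handleAuthSelect bail out with an error instead of
// persisting the credentials entered in the dialog.
isProviderManaged(AuthType.USE_OPENAI, 'gpt-4o'); // -> true
isProviderManaged(AuthType.USE_OPENAI, 'some-ad-hoc-model'); // -> false
```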
@@ -11,9 +11,14 @@ import type { SlashCommand, type CommandContext } from './types.js';
|
||||
import { createMockCommandContext } from '../../test-utils/mockCommandContext.js';
|
||||
import { MessageType } from '../types.js';
|
||||
import type { LoadedSettings } from '../../config/settings.js';
|
||||
import { readFile } from 'node:fs/promises';
|
||||
import os from 'node:os';
|
||||
import path from 'node:path';
|
||||
import {
|
||||
getErrorMessage,
|
||||
loadServerHierarchicalMemory,
|
||||
QWEN_DIR,
|
||||
setGeminiMdFilename,
|
||||
type FileDiscoveryService,
|
||||
type LoadServerHierarchicalMemoryResponse,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
@@ -31,7 +36,18 @@ vi.mock('@qwen-code/qwen-code-core', async (importOriginal) => {
|
||||
};
|
||||
});
|
||||
|
||||
vi.mock('node:fs/promises', () => {
|
||||
const readFile = vi.fn();
|
||||
return {
|
||||
readFile,
|
||||
default: {
|
||||
readFile,
|
||||
},
|
||||
};
|
||||
});
|
||||
|
||||
const mockLoadServerHierarchicalMemory = loadServerHierarchicalMemory as Mock;
|
||||
const mockReadFile = readFile as unknown as Mock;
|
||||
|
||||
describe('memoryCommand', () => {
|
||||
let mockContext: CommandContext;
|
||||
@@ -52,6 +68,10 @@ describe('memoryCommand', () => {
|
||||
let mockGetGeminiMdFileCount: Mock;
|
||||
|
||||
beforeEach(() => {
|
||||
setGeminiMdFilename('QWEN.md');
|
||||
mockReadFile.mockReset();
|
||||
vi.restoreAllMocks();
|
||||
|
||||
showCommand = getSubCommand('show');
|
||||
|
||||
mockGetUserMemory = vi.fn();
|
||||
@@ -102,6 +122,52 @@ describe('memoryCommand', () => {
|
||||
expect.any(Number),
|
||||
);
|
||||
});
|
||||
|
||||
it('should show project memory from the configured context file', async () => {
|
||||
const projectCommand = showCommand.subCommands?.find(
|
||||
(cmd) => cmd.name === '--project',
|
||||
);
|
||||
if (!projectCommand?.action) throw new Error('Command has no action');
|
||||
|
||||
setGeminiMdFilename('AGENTS.md');
|
||||
vi.spyOn(process, 'cwd').mockReturnValue('/test/project');
|
||||
mockReadFile.mockResolvedValue('project memory');
|
||||
|
||||
await projectCommand.action(mockContext, '');
|
||||
|
||||
const expectedProjectPath = path.join('/test/project', 'AGENTS.md');
|
||||
expect(mockReadFile).toHaveBeenCalledWith(expectedProjectPath, 'utf-8');
|
||||
expect(mockContext.ui.addItem).toHaveBeenCalledWith(
|
||||
{
|
||||
type: MessageType.INFO,
|
||||
text: expect.stringContaining(expectedProjectPath),
|
||||
},
|
||||
expect.any(Number),
|
||||
);
|
||||
});
|
||||
|
||||
it('should show global memory from the configured context file', async () => {
|
||||
const globalCommand = showCommand.subCommands?.find(
|
||||
(cmd) => cmd.name === '--global',
|
||||
);
|
||||
if (!globalCommand?.action) throw new Error('Command has no action');
|
||||
|
||||
setGeminiMdFilename('AGENTS.md');
|
||||
vi.spyOn(os, 'homedir').mockReturnValue('/home/user');
|
||||
mockReadFile.mockResolvedValue('global memory');
|
||||
|
||||
await globalCommand.action(mockContext, '');
|
||||
|
||||
const expectedGlobalPath = path.join('/home/user', QWEN_DIR, 'AGENTS.md');
|
||||
expect(mockReadFile).toHaveBeenCalledWith(expectedGlobalPath, 'utf-8');
|
||||
expect(mockContext.ui.addItem).toHaveBeenCalledWith(
|
||||
{
|
||||
type: MessageType.INFO,
|
||||
text: expect.stringContaining('Global memory content'),
|
||||
},
|
||||
expect.any(Number),
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('/memory add', () => {
|
||||
|
||||
@@ -6,12 +6,13 @@

import {
  getErrorMessage,
  getCurrentGeminiMdFilename,
  loadServerHierarchicalMemory,
  QWEN_DIR,
} from '@qwen-code/qwen-code-core';
import path from 'node:path';
import os from 'os';
import fs from 'fs/promises';
import os from 'node:os';
import fs from 'node:fs/promises';
import { MessageType } from '../types.js';
import type { SlashCommand, SlashCommandActionReturn } from './types.js';
import { CommandKind } from './types.js';
@@ -56,7 +57,12 @@ export const memoryCommand: SlashCommand = {
      kind: CommandKind.BUILT_IN,
      action: async (context) => {
        try {
          const projectMemoryPath = path.join(process.cwd(), 'QWEN.md');
          const workingDir =
            context.services.config?.getWorkingDir?.() ?? process.cwd();
          const projectMemoryPath = path.join(
            workingDir,
            getCurrentGeminiMdFilename(),
          );
          const memoryContent = await fs.readFile(
            projectMemoryPath,
            'utf-8',
@@ -104,7 +110,7 @@ export const memoryCommand: SlashCommand = {
          const globalMemoryPath = path.join(
            os.homedir(),
            QWEN_DIR,
            'QWEN.md',
            getCurrentGeminiMdFilename(),
          );
          const globalMemoryContent = await fs.readFile(
            globalMemoryPath,
@@ -13,12 +13,6 @@ import {
|
||||
type ContentGeneratorConfig,
|
||||
type Config,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
import * as availableModelsModule from '../models/availableModels.js';
|
||||
|
||||
// Mock the availableModels module
|
||||
vi.mock('../models/availableModels.js', () => ({
|
||||
getAvailableModelsForAuthType: vi.fn(),
|
||||
}));
|
||||
|
||||
// Helper function to create a mock config
|
||||
function createMockConfig(
|
||||
@@ -31,9 +25,6 @@ function createMockConfig(
|
||||
|
||||
describe('modelCommand', () => {
|
||||
let mockContext: CommandContext;
|
||||
const mockGetAvailableModelsForAuthType = vi.mocked(
|
||||
availableModelsModule.getAvailableModelsForAuthType,
|
||||
);
|
||||
|
||||
beforeEach(() => {
|
||||
mockContext = createMockCommandContext();
|
||||
@@ -87,10 +78,6 @@ describe('modelCommand', () => {
|
||||
});
|
||||
|
||||
it('should return dialog action for QWEN_OAUTH auth type', async () => {
|
||||
mockGetAvailableModelsForAuthType.mockReturnValue([
|
||||
{ id: 'qwen3-coder-plus', label: 'qwen3-coder-plus' },
|
||||
]);
|
||||
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
@@ -105,11 +92,7 @@ describe('modelCommand', () => {
|
||||
});
|
||||
});
|
||||
|
||||
it('should return dialog action for USE_OPENAI auth type when model is available', async () => {
|
||||
mockGetAvailableModelsForAuthType.mockReturnValue([
|
||||
{ id: 'gpt-4', label: 'gpt-4' },
|
||||
]);
|
||||
|
||||
it('should return dialog action for USE_OPENAI auth type', async () => {
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
@@ -124,28 +107,7 @@ describe('modelCommand', () => {
|
||||
});
|
||||
});
|
||||
|
||||
it('should return error for USE_OPENAI auth type when no model is available', async () => {
|
||||
mockGetAvailableModelsForAuthType.mockReturnValue([]);
|
||||
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
});
|
||||
mockContext.services.config = mockConfig as Config;
|
||||
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content:
|
||||
'No models available for the current authentication type (openai).',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return error for unsupported auth types', async () => {
|
||||
mockGetAvailableModelsForAuthType.mockReturnValue([]);
|
||||
|
||||
it('should return dialog action for unsupported auth types', async () => {
|
||||
const mockConfig = createMockConfig({
|
||||
model: 'test-model',
|
||||
authType: 'UNSUPPORTED_AUTH_TYPE' as AuthType,
|
||||
@@ -155,10 +117,8 @@ describe('modelCommand', () => {
|
||||
const result = await modelCommand.action!(mockContext, '');
|
||||
|
||||
expect(result).toEqual({
|
||||
type: 'message',
|
||||
messageType: 'error',
|
||||
content:
|
||||
'No models available for the current authentication type (UNSUPPORTED_AUTH_TYPE).',
|
||||
type: 'dialog',
|
||||
dialog: 'model',
|
||||
});
|
||||
});
|
||||
|
||||
|
||||
@@ -11,7 +11,6 @@ import type {
  MessageActionReturn,
} from './types.js';
import { CommandKind } from './types.js';
import { getAvailableModelsForAuthType } from '../models/availableModels.js';
import { t } from '../../i18n/index.js';

export const modelCommand: SlashCommand = {
@@ -30,7 +29,7 @@ export const modelCommand: SlashCommand = {
      return {
        type: 'message',
        messageType: 'error',
        content: 'Configuration not available.',
        content: t('Configuration not available.'),
      };
    }

@@ -52,22 +51,6 @@ export const modelCommand: SlashCommand = {
      };
    }

    const availableModels = getAvailableModelsForAuthType(authType);

    if (availableModels.length === 0) {
      return {
        type: 'message',
        messageType: 'error',
        content: t(
          'No models available for the current authentication type ({{authType}}).',
          {
            authType,
          },
        ),
      };
    }

    // Trigger model selection dialog
    return {
      type: 'dialog',
      dialog: 'model',

@@ -54,7 +54,7 @@ export function ApprovalModeDialog({
}: ApprovalModeDialogProps): React.JSX.Element {
  // Start with User scope by default
  const [selectedScope, setSelectedScope] = useState<SettingScope>(
    SettingScope.User,
    SettingScope.Workspace,
  );

  // Track the currently highlighted approval mode

@@ -25,7 +25,6 @@ import { useUIState } from '../contexts/UIStateContext.js';
import { useUIActions } from '../contexts/UIActionsContext.js';
import { useConfig } from '../contexts/ConfigContext.js';
import { useSettings } from '../contexts/SettingsContext.js';
import { SettingScope } from '../../config/settings.js';
import { AuthState } from '../types.js';
import { AuthType } from '@qwen-code/qwen-code-core';
import process from 'node:process';
@@ -202,7 +201,7 @@ export const DialogManager = ({
    return (
      <OpenAIKeyPrompt
        onSubmit={(apiKey, baseUrl, model) => {
          uiActions.handleAuthSelect(AuthType.USE_OPENAI, SettingScope.User, {
          uiActions.handleAuthSelect(AuthType.USE_OPENAI, {
            apiKey,
            baseUrl,
            model,
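
As the DialogManager hunk shows, callers no longer pass a `SettingScope`; the hook derives the persistence scope itself via `getPersistScopeForModelSelection(settings)`. A hedged sketch of the resulting call shape (the wrapper function and its import path are illustrative assumptions; the signature comes from the diff):

```ts
import { AuthType } from '@qwen-code/qwen-code-core';
import type { OpenAICredentials } from '../components/OpenAIKeyPrompt.js';

type HandleAuthSelect = (
  authType: AuthType | undefined,
  credentials?: OpenAICredentials,
) => Promise<void>;

async function submitOpenAICredentials(
  handleAuthSelect: HandleAuthSelect,
  apiKey: string,
  baseUrl: string,
  model: string,
): Promise<void> {
  // No SettingScope argument anymore; passing undefined instead of an
  // AuthType simply closes the dialog without changing auth state.
  await handleAuthSelect(AuthType.USE_OPENAI, { apiKey, baseUrl, model });
}
```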
@@ -10,7 +10,11 @@ import { ModelDialog } from './ModelDialog.js';
|
||||
import { useKeypress } from '../hooks/useKeypress.js';
|
||||
import { DescriptiveRadioButtonSelect } from './shared/DescriptiveRadioButtonSelect.js';
|
||||
import { ConfigContext } from '../contexts/ConfigContext.js';
|
||||
import { SettingsContext } from '../contexts/SettingsContext.js';
|
||||
import type { Config } from '@qwen-code/qwen-code-core';
|
||||
import { AuthType } from '@qwen-code/qwen-code-core';
|
||||
import type { LoadedSettings } from '../../config/settings.js';
|
||||
import { SettingScope } from '../../config/settings.js';
|
||||
import {
|
||||
AVAILABLE_MODELS_QWEN,
|
||||
MAINLINE_CODER,
|
||||
@@ -36,18 +40,29 @@ const renderComponent = (
|
||||
};
|
||||
const combinedProps = { ...defaultProps, ...props };
|
||||
|
||||
const mockSettings = {
|
||||
isTrusted: true,
|
||||
user: { settings: {} },
|
||||
workspace: { settings: {} },
|
||||
setValue: vi.fn(),
|
||||
} as unknown as LoadedSettings;
|
||||
|
||||
const mockConfig = contextValue
|
||||
? ({
|
||||
// --- Functions used by ModelDialog ---
|
||||
getModel: vi.fn(() => MAINLINE_CODER),
|
||||
setModel: vi.fn(),
|
||||
setModel: vi.fn().mockResolvedValue(undefined),
|
||||
switchModel: vi.fn().mockResolvedValue(undefined),
|
||||
getAuthType: vi.fn(() => 'qwen-oauth'),
|
||||
|
||||
// --- Functions used by ClearcutLogger ---
|
||||
getUsageStatisticsEnabled: vi.fn(() => true),
|
||||
getSessionId: vi.fn(() => 'mock-session-id'),
|
||||
getDebugMode: vi.fn(() => false),
|
||||
getContentGeneratorConfig: vi.fn(() => ({ authType: 'mock' })),
|
||||
getContentGeneratorConfig: vi.fn(() => ({
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
model: MAINLINE_CODER,
|
||||
})),
|
||||
getUseSmartEdit: vi.fn(() => false),
|
||||
getUseModelRouter: vi.fn(() => false),
|
||||
getProxy: vi.fn(() => undefined),
|
||||
@@ -58,21 +73,27 @@ const renderComponent = (
|
||||
: undefined;
|
||||
|
||||
const renderResult = render(
|
||||
<ConfigContext.Provider value={mockConfig}>
|
||||
<ModelDialog {...combinedProps} />
|
||||
</ConfigContext.Provider>,
|
||||
<SettingsContext.Provider value={mockSettings}>
|
||||
<ConfigContext.Provider value={mockConfig}>
|
||||
<ModelDialog {...combinedProps} />
|
||||
</ConfigContext.Provider>
|
||||
</SettingsContext.Provider>,
|
||||
);
|
||||
|
||||
return {
|
||||
...renderResult,
|
||||
props: combinedProps,
|
||||
mockConfig,
|
||||
mockSettings,
|
||||
};
|
||||
};
|
||||
|
||||
describe('<ModelDialog />', () => {
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
// Ensure env-based fallback models don't leak into this suite from the developer environment.
|
||||
delete process.env['OPENAI_MODEL'];
|
||||
delete process.env['ANTHROPIC_MODEL'];
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
@@ -91,8 +112,12 @@ describe('<ModelDialog />', () => {
|
||||
|
||||
const props = mockedSelect.mock.calls[0][0];
|
||||
expect(props.items).toHaveLength(AVAILABLE_MODELS_QWEN.length);
|
||||
expect(props.items[0].value).toBe(MAINLINE_CODER);
|
||||
expect(props.items[1].value).toBe(MAINLINE_VLM);
|
||||
expect(props.items[0].value).toBe(
|
||||
`${AuthType.QWEN_OAUTH}::${MAINLINE_CODER}`,
|
||||
);
|
||||
expect(props.items[1].value).toBe(
|
||||
`${AuthType.QWEN_OAUTH}::${MAINLINE_VLM}`,
|
||||
);
|
||||
expect(props.showNumbers).toBe(true);
|
||||
});
|
||||
|
||||
@@ -139,16 +164,93 @@ describe('<ModelDialog />', () => {
|
||||
expect(mockedSelect).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
it('calls config.setModel and onClose when DescriptiveRadioButtonSelect.onSelect is triggered', () => {
|
||||
const { props, mockConfig } = renderComponent({}, {}); // Pass empty object for contextValue
|
||||
it('calls config.switchModel and onClose when DescriptiveRadioButtonSelect.onSelect is triggered', async () => {
|
||||
const { props, mockConfig, mockSettings } = renderComponent({}, {}); // Pass empty object for contextValue
|
||||
|
||||
const childOnSelect = mockedSelect.mock.calls[0][0].onSelect;
|
||||
expect(childOnSelect).toBeDefined();
|
||||
|
||||
childOnSelect(MAINLINE_CODER);
|
||||
await childOnSelect(`${AuthType.QWEN_OAUTH}::${MAINLINE_CODER}`);
|
||||
|
||||
// Assert against the default mock provided by renderComponent
|
||||
expect(mockConfig?.setModel).toHaveBeenCalledWith(MAINLINE_CODER);
|
||||
expect(mockConfig?.switchModel).toHaveBeenCalledWith(
|
||||
AuthType.QWEN_OAUTH,
|
||||
MAINLINE_CODER,
|
||||
undefined,
|
||||
{
|
||||
reason: 'user_manual',
|
||||
context: 'Model switched via /model dialog',
|
||||
},
|
||||
);
|
||||
expect(mockSettings.setValue).toHaveBeenCalledWith(
|
||||
SettingScope.User,
|
||||
'model.name',
|
||||
MAINLINE_CODER,
|
||||
);
|
||||
expect(mockSettings.setValue).toHaveBeenCalledWith(
|
||||
SettingScope.User,
|
||||
'security.auth.selectedType',
|
||||
AuthType.QWEN_OAUTH,
|
||||
);
|
||||
expect(props.onClose).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
it('calls config.switchModel and persists authType+model when selecting a different authType', async () => {
|
||||
const switchModel = vi.fn().mockResolvedValue(undefined);
|
||||
const getAuthType = vi.fn(() => AuthType.USE_OPENAI);
|
||||
const getAvailableModelsForAuthType = vi.fn((t: AuthType) => {
|
||||
if (t === AuthType.USE_OPENAI) {
|
||||
return [{ id: 'gpt-4', label: 'GPT-4', authType: t }];
|
||||
}
|
||||
if (t === AuthType.QWEN_OAUTH) {
|
||||
return AVAILABLE_MODELS_QWEN.map((m) => ({
|
||||
id: m.id,
|
||||
label: m.label,
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
}));
|
||||
}
|
||||
return [];
|
||||
});
|
||||
|
||||
const mockConfigWithSwitchAuthType = {
|
||||
getAuthType,
|
||||
getModel: vi.fn(() => 'gpt-4'),
|
||||
getContentGeneratorConfig: vi.fn(() => ({
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
model: MAINLINE_CODER,
|
||||
})),
|
||||
// Add switchModel to the mock object (not the type)
|
||||
switchModel,
|
||||
getAvailableModelsForAuthType,
|
||||
};
|
||||
|
||||
const { props, mockSettings } = renderComponent(
|
||||
{},
|
||||
// Cast to Config to bypass type checking, matching the runtime behavior
|
||||
mockConfigWithSwitchAuthType as unknown as Partial<Config>,
|
||||
);
|
||||
|
||||
const childOnSelect = mockedSelect.mock.calls[0][0].onSelect;
|
||||
await childOnSelect(`${AuthType.QWEN_OAUTH}::${MAINLINE_CODER}`);
|
||||
|
||||
expect(switchModel).toHaveBeenCalledWith(
|
||||
AuthType.QWEN_OAUTH,
|
||||
MAINLINE_CODER,
|
||||
{ requireCachedCredentials: true },
|
||||
{
|
||||
reason: 'user_manual',
|
||||
context: 'AuthType+model switched via /model dialog',
|
||||
},
|
||||
);
|
||||
expect(mockSettings.setValue).toHaveBeenCalledWith(
|
||||
SettingScope.User,
|
||||
'model.name',
|
||||
MAINLINE_CODER,
|
||||
);
|
||||
expect(mockSettings.setValue).toHaveBeenCalledWith(
|
||||
SettingScope.User,
|
||||
'security.auth.selectedType',
|
||||
AuthType.QWEN_OAUTH,
|
||||
);
|
||||
expect(props.onClose).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
@@ -193,17 +295,25 @@ describe('<ModelDialog />', () => {
|
||||
it('updates initialIndex when config context changes', () => {
|
||||
const mockGetModel = vi.fn(() => MAINLINE_CODER);
|
||||
const mockGetAuthType = vi.fn(() => 'qwen-oauth');
|
||||
const mockSettings = {
|
||||
isTrusted: true,
|
||||
user: { settings: {} },
|
||||
workspace: { settings: {} },
|
||||
setValue: vi.fn(),
|
||||
} as unknown as LoadedSettings;
|
||||
const { rerender } = render(
|
||||
<ConfigContext.Provider
|
||||
value={
|
||||
{
|
||||
getModel: mockGetModel,
|
||||
getAuthType: mockGetAuthType,
|
||||
} as unknown as Config
|
||||
}
|
||||
>
|
||||
<ModelDialog onClose={vi.fn()} />
|
||||
</ConfigContext.Provider>,
|
||||
<SettingsContext.Provider value={mockSettings}>
|
||||
<ConfigContext.Provider
|
||||
value={
|
||||
{
|
||||
getModel: mockGetModel,
|
||||
getAuthType: mockGetAuthType,
|
||||
} as unknown as Config
|
||||
}
|
||||
>
|
||||
<ModelDialog onClose={vi.fn()} />
|
||||
</ConfigContext.Provider>
|
||||
</SettingsContext.Provider>,
|
||||
);
|
||||
|
||||
expect(mockedSelect.mock.calls[0][0].initialIndex).toBe(0);
|
||||
@@ -215,9 +325,11 @@ describe('<ModelDialog />', () => {
|
||||
} as unknown as Config;
|
||||
|
||||
rerender(
|
||||
<ConfigContext.Provider value={newMockConfig}>
|
||||
<ModelDialog onClose={vi.fn()} />
|
||||
</ConfigContext.Provider>,
|
||||
<SettingsContext.Provider value={mockSettings}>
|
||||
<ConfigContext.Provider value={newMockConfig}>
|
||||
<ModelDialog onClose={vi.fn()} />
|
||||
</ConfigContext.Provider>
|
||||
</SettingsContext.Provider>,
|
||||
);
|
||||
|
||||
// Should be called at least twice: initial render + re-render after context change
|
||||
|
||||
@@ -5,52 +5,210 @@
|
||||
*/
|
||||
|
||||
import type React from 'react';
|
||||
import { useCallback, useContext, useMemo } from 'react';
|
||||
import { useCallback, useContext, useMemo, useState } from 'react';
|
||||
import { Box, Text } from 'ink';
|
||||
import {
|
||||
AuthType,
|
||||
ModelSlashCommandEvent,
|
||||
logModelSlashCommand,
|
||||
type ContentGeneratorConfig,
|
||||
type ContentGeneratorConfigSource,
|
||||
type ContentGeneratorConfigSources,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
import { useKeypress } from '../hooks/useKeypress.js';
|
||||
import { theme } from '../semantic-colors.js';
|
||||
import { DescriptiveRadioButtonSelect } from './shared/DescriptiveRadioButtonSelect.js';
|
||||
import { ConfigContext } from '../contexts/ConfigContext.js';
|
||||
import { UIStateContext } from '../contexts/UIStateContext.js';
|
||||
import { useSettings } from '../contexts/SettingsContext.js';
|
||||
import {
|
||||
getAvailableModelsForAuthType,
|
||||
MAINLINE_CODER,
|
||||
} from '../models/availableModels.js';
|
||||
import { getPersistScopeForModelSelection } from '../../config/modelProvidersScope.js';
|
||||
import { t } from '../../i18n/index.js';
|
||||
|
||||
interface ModelDialogProps {
|
||||
onClose: () => void;
|
||||
}
|
||||
|
||||
function formatSourceBadge(
|
||||
source: ContentGeneratorConfigSource | undefined,
|
||||
): string | undefined {
|
||||
if (!source) return undefined;
|
||||
|
||||
switch (source.kind) {
|
||||
case 'cli':
|
||||
return source.detail ? `CLI ${source.detail}` : 'CLI';
|
||||
case 'env':
|
||||
return source.envKey ? `ENV ${source.envKey}` : 'ENV';
|
||||
case 'settings':
|
||||
return source.settingsPath
|
||||
? `Settings ${source.settingsPath}`
|
||||
: 'Settings';
|
||||
case 'modelProviders': {
|
||||
const suffix =
|
||||
source.authType && source.modelId
|
||||
? `${source.authType}:${source.modelId}`
|
||||
: source.authType
|
||||
? `${source.authType}`
|
||||
: source.modelId
|
||||
? `${source.modelId}`
|
||||
: '';
|
||||
return suffix ? `ModelProviders ${suffix}` : 'ModelProviders';
|
||||
}
|
||||
case 'default':
|
||||
return source.detail ? `Default ${source.detail}` : 'Default';
|
||||
case 'computed':
|
||||
return source.detail ? `Computed ${source.detail}` : 'Computed';
|
||||
case 'programmatic':
|
||||
return source.detail ? `Programmatic ${source.detail}` : 'Programmatic';
|
||||
case 'unknown':
|
||||
default:
|
||||
return undefined;
|
||||
}
|
||||
}
|
||||
|
||||
function readSourcesFromConfig(config: unknown): ContentGeneratorConfigSources {
|
||||
if (!config) {
|
||||
return {};
|
||||
}
|
||||
const maybe = config as {
|
||||
getContentGeneratorConfigSources?: () => ContentGeneratorConfigSources;
|
||||
};
|
||||
return maybe.getContentGeneratorConfigSources?.() ?? {};
|
||||
}
|
||||
|
||||
function maskApiKey(apiKey: string | undefined): string {
|
||||
if (!apiKey) return '(not set)';
|
||||
const trimmed = apiKey.trim();
|
||||
if (trimmed.length === 0) return '(not set)';
|
||||
if (trimmed.length <= 6) return '***';
|
||||
const head = trimmed.slice(0, 3);
|
||||
const tail = trimmed.slice(-4);
|
||||
return `${head}…${tail}`;
|
||||
}
|
||||
|
||||
function persistModelSelection(
|
||||
settings: ReturnType<typeof useSettings>,
|
||||
modelId: string,
|
||||
): void {
|
||||
const scope = getPersistScopeForModelSelection(settings);
|
||||
settings.setValue(scope, 'model.name', modelId);
|
||||
}
|
||||
|
||||
function persistAuthTypeSelection(
|
||||
settings: ReturnType<typeof useSettings>,
|
||||
authType: AuthType,
|
||||
): void {
|
||||
const scope = getPersistScopeForModelSelection(settings);
|
||||
settings.setValue(scope, 'security.auth.selectedType', authType);
|
||||
}
|
||||
|
||||
function ConfigRow({
|
||||
label,
|
||||
value,
|
||||
badge,
|
||||
}: {
|
||||
label: string;
|
||||
value: React.ReactNode;
|
||||
badge?: string;
|
||||
}): React.JSX.Element {
|
||||
return (
|
||||
<Box flexDirection="column">
|
||||
<Box>
|
||||
<Box minWidth={12} flexShrink={0}>
|
||||
<Text color={theme.text.secondary}>{label}:</Text>
|
||||
</Box>
|
||||
<Box flexGrow={1} flexDirection="row" flexWrap="wrap">
|
||||
<Text>{value}</Text>
|
||||
</Box>
|
||||
</Box>
|
||||
{badge ? (
|
||||
<Box>
|
||||
<Box minWidth={12} flexShrink={0}>
|
||||
<Text> </Text>
|
||||
</Box>
|
||||
<Box flexGrow={1}>
|
||||
<Text color={theme.text.secondary}>{badge}</Text>
|
||||
</Box>
|
||||
</Box>
|
||||
) : null}
|
||||
</Box>
|
||||
);
|
||||
}
|
||||
|
||||
export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
|
||||
const config = useContext(ConfigContext);
|
||||
const uiState = useContext(UIStateContext);
|
||||
const settings = useSettings();
|
||||
|
||||
// Get auth type from config, default to QWEN_OAUTH if not available
|
||||
const authType = config?.getAuthType() ?? AuthType.QWEN_OAUTH;
|
||||
// Local error state for displaying errors within the dialog
|
||||
const [errorMessage, setErrorMessage] = useState<string | null>(null);
|
||||
|
||||
// Get available models based on auth type
|
||||
const availableModels = useMemo(
|
||||
() => getAvailableModelsForAuthType(authType),
|
||||
[authType],
|
||||
);
|
||||
const authType = config?.getAuthType();
|
||||
const effectiveConfig =
|
||||
(config?.getContentGeneratorConfig?.() as
|
||||
| ContentGeneratorConfig
|
||||
| undefined) ?? undefined;
|
||||
const sources = readSourcesFromConfig(config);
|
||||
|
||||
const availableModelEntries = useMemo(() => {
|
||||
const allAuthTypes = Object.values(AuthType) as AuthType[];
|
||||
const modelsByAuthType = allAuthTypes
|
||||
.map((t) => ({
|
||||
authType: t,
|
||||
models: getAvailableModelsForAuthType(t, config ?? undefined),
|
||||
}))
|
||||
.filter((x) => x.models.length > 0);
|
||||
|
||||
// Fixed order: qwen-oauth first, then others in a stable order
|
||||
const authTypeOrder: AuthType[] = [
|
||||
AuthType.QWEN_OAUTH,
|
||||
AuthType.USE_OPENAI,
|
||||
AuthType.USE_ANTHROPIC,
|
||||
AuthType.USE_GEMINI,
|
||||
AuthType.USE_VERTEX_AI,
|
||||
];
|
||||
|
||||
// Filter to only include authTypes that have models
|
||||
const availableAuthTypes = new Set(modelsByAuthType.map((x) => x.authType));
|
||||
const orderedAuthTypes = authTypeOrder.filter((t) =>
|
||||
availableAuthTypes.has(t),
|
||||
);
|
||||
|
||||
return orderedAuthTypes.flatMap((t) => {
|
||||
const models =
|
||||
modelsByAuthType.find((x) => x.authType === t)?.models ?? [];
|
||||
return models.map((m) => ({ authType: t, model: m }));
|
||||
});
|
||||
}, [config]);
|
||||
|
||||
const MODEL_OPTIONS = useMemo(
|
||||
() =>
|
||||
availableModels.map((model) => ({
|
||||
value: model.id,
|
||||
title: model.label,
|
||||
description: model.description || '',
|
||||
key: model.id,
|
||||
})),
|
||||
[availableModels],
|
||||
availableModelEntries.map(({ authType: t2, model }) => {
|
||||
const value = `${t2}::${model.id}`;
|
||||
const title = (
|
||||
<Text>
|
||||
<Text bold color={theme.text.accent}>
|
||||
[{t2}]
|
||||
</Text>
|
||||
<Text>{` ${model.label}`}</Text>
|
||||
</Text>
|
||||
);
|
||||
const description = model.description || '';
|
||||
return {
|
||||
value,
|
||||
title,
|
||||
description,
|
||||
key: value,
|
||||
};
|
||||
}),
|
||||
[availableModelEntries],
|
||||
);
|
||||
|
||||
// Determine the Preferred Model (read once when the dialog opens).
|
||||
const preferredModel = config?.getModel() || MAINLINE_CODER;
|
||||
const preferredModelId = config?.getModel() || MAINLINE_CODER;
|
||||
const preferredKey = authType ? `${authType}::${preferredModelId}` : '';
|
||||
|
||||
useKeypress(
|
||||
(key) => {
|
||||
@@ -61,25 +219,83 @@ export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
|
||||
{ isActive: true },
|
||||
);
|
||||
|
||||
// Calculate the initial index based on the preferred model.
|
||||
const initialIndex = useMemo(
|
||||
() => MODEL_OPTIONS.findIndex((option) => option.value === preferredModel),
|
||||
[MODEL_OPTIONS, preferredModel],
|
||||
);
|
||||
const initialIndex = useMemo(() => {
|
||||
const index = MODEL_OPTIONS.findIndex(
|
||||
(option) => option.value === preferredKey,
|
||||
);
|
||||
return index === -1 ? 0 : index;
|
||||
}, [MODEL_OPTIONS, preferredKey]);
|
||||
|
||||
// Handle selection internally (Autonomous Dialog).
|
||||
const handleSelect = useCallback(
|
||||
(model: string) => {
|
||||
async (selected: string) => {
|
||||
// Clear any previous error
|
||||
setErrorMessage(null);
|
||||
|
||||
const sep = '::';
|
||||
const idx = selected.indexOf(sep);
|
||||
const selectedAuthType = (
|
||||
idx >= 0 ? selected.slice(0, idx) : authType
|
||||
) as AuthType;
|
||||
const modelId = idx >= 0 ? selected.slice(idx + sep.length) : selected;
|
||||
|
||||
if (config) {
|
||||
config.setModel(model);
|
||||
const event = new ModelSlashCommandEvent(model);
|
||||
try {
|
||||
await config.switchModel(
|
||||
selectedAuthType,
|
||||
modelId,
|
||||
selectedAuthType !== authType &&
|
||||
selectedAuthType === AuthType.QWEN_OAUTH
|
||||
? { requireCachedCredentials: true }
|
||||
: undefined,
|
||||
{
|
||||
reason: 'user_manual',
|
||||
context:
|
||||
selectedAuthType === authType
|
||||
? 'Model switched via /model dialog'
|
||||
: 'AuthType+model switched via /model dialog',
|
||||
},
|
||||
);
|
||||
} catch (e) {
|
||||
const baseErrorMessage = e instanceof Error ? e.message : String(e);
|
||||
setErrorMessage(
|
||||
`Failed to switch model to '${modelId}'.\n\n${baseErrorMessage}`,
|
||||
);
|
||||
return;
|
||||
}
|
||||
const event = new ModelSlashCommandEvent(modelId);
|
||||
logModelSlashCommand(config, event);
|
||||
|
||||
const after = config.getContentGeneratorConfig?.() as
|
||||
| ContentGeneratorConfig
|
||||
| undefined;
|
||||
const effectiveAuthType =
|
||||
after?.authType ?? selectedAuthType ?? authType;
|
||||
const effectiveModelId = after?.model ?? modelId;
|
||||
|
||||
persistModelSelection(settings, effectiveModelId);
|
||||
persistAuthTypeSelection(settings, effectiveAuthType);
|
||||
|
||||
const baseUrl = after?.baseUrl ?? '(default)';
|
||||
const maskedKey = maskApiKey(after?.apiKey);
|
||||
uiState?.historyManager.addItem(
|
||||
{
|
||||
type: 'info',
|
||||
text:
|
||||
`authType: ${effectiveAuthType}\n` +
|
||||
`Using model: ${effectiveModelId}\n` +
|
||||
`Base URL: ${baseUrl}\n` +
|
||||
`API key: ${maskedKey}`,
|
||||
},
|
||||
Date.now(),
|
||||
);
|
||||
}
|
||||
onClose();
|
||||
},
|
||||
[config, onClose],
|
||||
[authType, config, onClose, settings, uiState, setErrorMessage],
|
||||
);
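// Illustration only — the option values selected above encode both pieces as
// `${authType}::${modelId}`, and handleSelect splits on the first '::' so a model id
// that itself contains '::' still round-trips. `parseModelOptionValue` is a
// hypothetical helper sketched here, not part of this patch.
function parseModelOptionValue(value: string): {
  authType: string | undefined;
  modelId: string;
} {
  const sep = '::';
  const idx = value.indexOf(sep);
  return idx >= 0
    ? { authType: value.slice(0, idx), modelId: value.slice(idx + sep.length) }
    : { authType: undefined, modelId: value };
}
// Example: parseModelOptionValue('openai::gpt-4o') -> { authType: 'openai', modelId: 'gpt-4o' }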
|
||||
|
||||
const hasModels = MODEL_OPTIONS.length > 0;
|
||||
|
||||
return (
|
||||
<Box
|
||||
borderStyle="round"
|
||||
@@ -89,14 +305,73 @@ export function ModelDialog({ onClose }: ModelDialogProps): React.JSX.Element {
|
||||
width="100%"
|
||||
>
|
||||
<Text bold>{t('Select Model')}</Text>
|
||||
<Box marginTop={1}>
|
||||
<DescriptiveRadioButtonSelect
|
||||
items={MODEL_OPTIONS}
|
||||
onSelect={handleSelect}
|
||||
initialIndex={initialIndex}
|
||||
showNumbers={true}
|
||||
/>
|
||||
|
||||
<Box marginTop={1} flexDirection="column">
|
||||
<Text color={theme.text.secondary}>
|
||||
{t('Current (effective) configuration')}
|
||||
</Text>
|
||||
<Box flexDirection="column" marginTop={1}>
|
||||
<ConfigRow label="AuthType" value={authType} />
|
||||
<ConfigRow
|
||||
label="Model"
|
||||
value={effectiveConfig?.model ?? config?.getModel?.() ?? ''}
|
||||
badge={formatSourceBadge(sources['model'])}
|
||||
/>
|
||||
|
||||
{authType !== AuthType.QWEN_OAUTH && (
|
||||
<>
|
||||
<ConfigRow
|
||||
label="Base URL"
|
||||
value={effectiveConfig?.baseUrl ?? ''}
|
||||
badge={formatSourceBadge(sources['baseUrl'])}
|
||||
/>
|
||||
<ConfigRow
|
||||
label="API Key"
|
||||
value={effectiveConfig?.apiKey ? t('(set)') : t('(not set)')}
|
||||
badge={formatSourceBadge(sources['apiKey'])}
|
||||
/>
|
||||
</>
|
||||
)}
|
||||
</Box>
|
||||
</Box>
|
||||
|
||||
{!hasModels ? (
|
||||
<Box marginTop={1} flexDirection="column">
|
||||
<Text color={theme.status.warning}>
|
||||
{t(
|
||||
'No models available for the current authentication type ({{authType}}).',
|
||||
{
|
||||
authType: authType ? String(authType) : t('(none)'),
|
||||
},
|
||||
)}
|
||||
</Text>
|
||||
<Box marginTop={1}>
|
||||
<Text color={theme.text.secondary}>
|
||||
{t(
|
||||
'Please configure models in settings.modelProviders or use environment variables.',
|
||||
)}
|
||||
</Text>
|
||||
</Box>
|
||||
</Box>
|
||||
) : (
|
||||
<Box marginTop={1}>
|
||||
<DescriptiveRadioButtonSelect
|
||||
items={MODEL_OPTIONS}
|
||||
onSelect={handleSelect}
|
||||
initialIndex={initialIndex}
|
||||
showNumbers={true}
|
||||
/>
|
||||
</Box>
|
||||
)}
|
||||
|
||||
{errorMessage && (
|
||||
<Box marginTop={1} flexDirection="column" paddingX={1}>
|
||||
<Text color={theme.status.error} wrap="wrap">
|
||||
✕ {errorMessage}
|
||||
</Text>
|
||||
</Box>
|
||||
)}
|
||||
|
||||
<Box marginTop={1} flexDirection="column">
|
||||
<Text color={theme.text.secondary}>{t('(Press Esc to close)')}</Text>
|
||||
</Box>
|
||||
|
||||
@@ -11,7 +11,7 @@ import { BaseSelectionList } from './BaseSelectionList.js';
|
||||
import type { SelectionListItem } from '../../hooks/useSelectionList.js';
|
||||
|
||||
export interface DescriptiveRadioSelectItem<T> extends SelectionListItem<T> {
|
||||
title: string;
|
||||
title: React.ReactNode;
|
||||
description: string;
|
||||
}
|
||||
|
||||
|
||||
@@ -30,7 +30,6 @@ export interface UIActions {
|
||||
) => void;
|
||||
handleAuthSelect: (
|
||||
authType: AuthType | undefined,
|
||||
scope: SettingScope,
|
||||
credentials?: OpenAICredentials,
|
||||
) => Promise<void>;
|
||||
setAuthState: (state: AuthState) => void;
|
||||
|
||||
@@ -25,7 +25,6 @@ export interface DialogCloseOptions {
|
||||
isAuthDialogOpen: boolean;
|
||||
handleAuthSelect: (
|
||||
authType: AuthType | undefined,
|
||||
scope: SettingScope,
|
||||
credentials?: OpenAICredentials,
|
||||
) => Promise<void>;
|
||||
pendingAuthType: AuthType | undefined;
|
||||
|
||||
@@ -912,7 +912,7 @@ export const useGeminiStream = (
|
||||
// Reset quota error flag when starting a new query (not a continuation)
|
||||
if (!options?.isContinuation) {
|
||||
setModelSwitchedFromQuotaError(false);
|
||||
config.setQuotaErrorOccurred(false);
|
||||
// No quota-error / fallback routing mechanism currently; keep state minimal.
|
||||
}
|
||||
|
||||
abortControllerRef.current = new AbortController();
|
||||
|
||||
@@ -1,21 +1,58 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { useCallback } from 'react';
|
||||
import { useStdin } from 'ink';
|
||||
import type { EditorType } from '@qwen-code/qwen-code-core';
|
||||
import {
|
||||
editorCommands,
|
||||
commandExists as coreCommandExists,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
import { spawnSync } from 'child_process';
|
||||
import { useSettings } from '../contexts/SettingsContext.js';
|
||||
|
||||
/**
|
||||
* Cache for command existence checks to avoid repeated execSync calls.
|
||||
*/
|
||||
const commandExistsCache = new Map<string, boolean>();
|
||||
|
||||
/**
|
||||
* Check if a command exists in the system with caching.
|
||||
* Results are cached to improve performance in test environments.
|
||||
*/
|
||||
function commandExists(cmd: string): boolean {
|
||||
if (commandExistsCache.has(cmd)) {
|
||||
return commandExistsCache.get(cmd)!;
|
||||
}
|
||||
|
||||
const exists = coreCommandExists(cmd);
|
||||
commandExistsCache.set(cmd, exists);
|
||||
return exists;
|
||||
}
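// Note: the Map above memoizes the result per command name for the process lifetime,
// so e.g. a second commandExists('code') call is a plain cache read rather than another
// execSync-backed check (see the cache doc comment above).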
|
||||
/**
|
||||
* Get the actual executable command for an editor type.
|
||||
*/
|
||||
function getExecutableCommand(editorType: EditorType): string {
|
||||
const commandConfig = editorCommands[editorType];
|
||||
const commands =
|
||||
process.platform === 'win32' ? commandConfig.win32 : commandConfig.default;
|
||||
|
||||
const availableCommand = commands.find((cmd) => commandExists(cmd));
|
||||
|
||||
if (!availableCommand) {
|
||||
throw new Error(
|
||||
`No available editor command found for ${editorType}. ` +
|
||||
`Tried: ${commands.join(', ')}. ` +
|
||||
`Please install one of these editors or set a different preferredEditor in settings.`,
|
||||
);
|
||||
}
|
||||
|
||||
return availableCommand;
|
||||
}
|
||||
|
||||
/**
|
||||
* Determines the editor command to use based on user preferences and platform.
|
||||
*/
|
||||
function getEditorCommand(preferredEditor?: EditorType): string {
|
||||
if (preferredEditor) {
|
||||
return preferredEditor;
|
||||
return getExecutableCommand(preferredEditor);
|
||||
}
|
||||
|
||||
// Platform-specific defaults with UI preference for macOS
|
||||
@@ -63,8 +100,14 @@ export function useLaunchEditor() {
|
||||
try {
|
||||
setRawMode?.(false);
|
||||
|
||||
// On Windows, .cmd and .bat files need shell: true
|
||||
const needsShell =
|
||||
process.platform === 'win32' &&
|
||||
(editorCommand.endsWith('.cmd') || editorCommand.endsWith('.bat'));
|
||||
|
||||
const { status, error } = spawnSync(editorCommand, editorArgs, {
|
||||
stdio: 'inherit',
|
||||
shell: needsShell,
|
||||
});
|
||||
|
||||
if (error) throw error;
|
||||
|
||||
@@ -62,7 +62,7 @@ const mockConfig = {
|
||||
getAllowedTools: vi.fn(() => []),
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getUseSmartEdit: () => false,
|
||||
getUseModelRouter: () => false,
|
||||
|
||||
packages/cli/src/ui/models/availableModels.test.ts (new file, 205 lines)
@@ -0,0 +1,205 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import {
|
||||
getAvailableModelsForAuthType,
|
||||
getFilteredQwenModels,
|
||||
getOpenAIAvailableModelFromEnv,
|
||||
isVisionModel,
|
||||
getDefaultVisionModel,
|
||||
AVAILABLE_MODELS_QWEN,
|
||||
MAINLINE_VLM,
|
||||
MAINLINE_CODER,
|
||||
} from './availableModels.js';
|
||||
import { AuthType, type Config } from '@qwen-code/qwen-code-core';
|
||||
|
||||
describe('availableModels', () => {
|
||||
describe('AVAILABLE_MODELS_QWEN', () => {
|
||||
it('should include coder model', () => {
|
||||
const coderModel = AVAILABLE_MODELS_QWEN.find(
|
||||
(m) => m.id === MAINLINE_CODER,
|
||||
);
|
||||
expect(coderModel).toBeDefined();
|
||||
expect(coderModel?.isVision).toBeFalsy();
|
||||
});
|
||||
|
||||
it('should include vision model', () => {
|
||||
const visionModel = AVAILABLE_MODELS_QWEN.find(
|
||||
(m) => m.id === MAINLINE_VLM,
|
||||
);
|
||||
expect(visionModel).toBeDefined();
|
||||
expect(visionModel?.isVision).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getFilteredQwenModels', () => {
|
||||
it('should return all models when vision preview is enabled', () => {
|
||||
const models = getFilteredQwenModels(true);
|
||||
expect(models.length).toBe(AVAILABLE_MODELS_QWEN.length);
|
||||
});
|
||||
|
||||
it('should filter out vision models when preview is disabled', () => {
|
||||
const models = getFilteredQwenModels(false);
|
||||
expect(models.every((m) => !m.isVision)).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getOpenAIAvailableModelFromEnv', () => {
|
||||
const originalEnv = process.env;
|
||||
|
||||
beforeEach(() => {
|
||||
process.env = { ...originalEnv };
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
process.env = originalEnv;
|
||||
});
|
||||
|
||||
it('should return null when OPENAI_MODEL is not set', () => {
|
||||
delete process.env['OPENAI_MODEL'];
|
||||
expect(getOpenAIAvailableModelFromEnv()).toBeNull();
|
||||
});
|
||||
|
||||
it('should return model from OPENAI_MODEL env var', () => {
|
||||
process.env['OPENAI_MODEL'] = 'gpt-4-turbo';
|
||||
const model = getOpenAIAvailableModelFromEnv();
|
||||
expect(model?.id).toBe('gpt-4-turbo');
|
||||
expect(model?.label).toBe('gpt-4-turbo');
|
||||
});
|
||||
|
||||
it('should trim whitespace from env var', () => {
|
||||
process.env['OPENAI_MODEL'] = ' gpt-4 ';
|
||||
const model = getOpenAIAvailableModelFromEnv();
|
||||
expect(model?.id).toBe('gpt-4');
|
||||
});
|
||||
});
|
||||
|
||||
describe('getAvailableModelsForAuthType', () => {
|
||||
const originalEnv = process.env;
|
||||
|
||||
beforeEach(() => {
|
||||
process.env = { ...originalEnv };
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
process.env = originalEnv;
|
||||
});
|
||||
|
||||
it('should return hard-coded qwen models for qwen-oauth', () => {
|
||||
const models = getAvailableModelsForAuthType(AuthType.QWEN_OAUTH);
|
||||
expect(models).toEqual(AVAILABLE_MODELS_QWEN);
|
||||
});
|
||||
|
||||
it('should return hard-coded qwen models even when config is provided', () => {
|
||||
const mockConfig = {
|
||||
getAvailableModels: vi
|
||||
.fn()
|
||||
.mockReturnValue([
|
||||
{ id: 'custom', label: 'Custom', authType: AuthType.QWEN_OAUTH },
|
||||
]),
|
||||
} as unknown as Config;
|
||||
|
||||
const models = getAvailableModelsForAuthType(
|
||||
AuthType.QWEN_OAUTH,
|
||||
mockConfig,
|
||||
);
|
||||
expect(models).toEqual(AVAILABLE_MODELS_QWEN);
|
||||
});
|
||||
|
||||
it('should use config.getAvailableModels for openai authType when available', () => {
|
||||
const mockModels = [
|
||||
{
|
||||
id: 'gpt-4',
|
||||
label: 'GPT-4',
|
||||
description: 'Test',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
isVision: false,
|
||||
},
|
||||
];
|
||||
const getAvailableModelsForAuthType = vi.fn().mockReturnValue(mockModels);
|
||||
const mockConfigWithMethod = {
|
||||
// Prefer the newer API when available.
|
||||
getAvailableModelsForAuthType,
|
||||
};
|
||||
|
||||
const models = getAvailableModelsForAuthType(
|
||||
AuthType.USE_OPENAI,
|
||||
mockConfigWithMethod as unknown as Config,
|
||||
);
|
||||
|
||||
expect(getAvailableModelsForAuthType).toHaveBeenCalled();
|
||||
expect(models[0].id).toBe('gpt-4');
|
||||
});
|
||||
|
||||
it('should fallback to env var for openai when config returns empty', () => {
|
||||
process.env['OPENAI_MODEL'] = 'fallback-model';
|
||||
const mockConfig = {
|
||||
getAvailableModelsForAuthType: vi.fn().mockReturnValue([]),
|
||||
} as unknown as Config;
|
||||
|
||||
const models = getAvailableModelsForAuthType(
|
||||
AuthType.USE_OPENAI,
|
||||
mockConfig,
|
||||
);
|
||||
|
||||
expect(models).toEqual([]);
|
||||
});
|
||||
|
||||
it('should fallback to env var for openai when config throws', () => {
|
||||
process.env['OPENAI_MODEL'] = 'fallback-model';
|
||||
const mockConfig = {
|
||||
getAvailableModelsForAuthType: vi.fn().mockImplementation(() => {
|
||||
throw new Error('Registry not initialized');
|
||||
}),
|
||||
} as unknown as Config;
|
||||
|
||||
const models = getAvailableModelsForAuthType(
|
||||
AuthType.USE_OPENAI,
|
||||
mockConfig,
|
||||
);
|
||||
|
||||
expect(models).toEqual([]);
|
||||
});
|
||||
|
||||
it('should return env model for openai without config', () => {
|
||||
process.env['OPENAI_MODEL'] = 'gpt-4-turbo';
|
||||
const models = getAvailableModelsForAuthType(AuthType.USE_OPENAI);
|
||||
expect(models[0].id).toBe('gpt-4-turbo');
|
||||
});
|
||||
|
||||
it('should return empty array for openai without config or env', () => {
|
||||
delete process.env['OPENAI_MODEL'];
|
||||
const models = getAvailableModelsForAuthType(AuthType.USE_OPENAI);
|
||||
expect(models).toEqual([]);
|
||||
});
|
||||
|
||||
it('should return empty array for other auth types', () => {
|
||||
const models = getAvailableModelsForAuthType(AuthType.USE_GEMINI);
|
||||
expect(models).toEqual([]);
|
||||
});
|
||||
});
|
||||
|
||||
describe('isVisionModel', () => {
|
||||
it('should return true for vision model', () => {
|
||||
expect(isVisionModel(MAINLINE_VLM)).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false for non-vision model', () => {
|
||||
expect(isVisionModel(MAINLINE_CODER)).toBe(false);
|
||||
});
|
||||
|
||||
it('should return false for unknown model', () => {
|
||||
expect(isVisionModel('unknown-model')).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getDefaultVisionModel', () => {
|
||||
it('should return the vision model ID', () => {
|
||||
expect(getDefaultVisionModel()).toBe(MAINLINE_VLM);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -4,7 +4,12 @@
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { AuthType, DEFAULT_QWEN_MODEL } from '@qwen-code/qwen-code-core';
|
||||
import {
|
||||
AuthType,
|
||||
DEFAULT_QWEN_MODEL,
|
||||
type Config,
|
||||
type AvailableModel as CoreAvailableModel,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
import { t } from '../../i18n/index.js';
|
||||
|
||||
export type AvailableModel = {
|
||||
@@ -57,20 +62,78 @@ export function getFilteredQwenModels(
|
||||
*/
|
||||
export function getOpenAIAvailableModelFromEnv(): AvailableModel | null {
|
||||
const id = process.env['OPENAI_MODEL']?.trim();
|
||||
return id ? { id, label: id } : null;
|
||||
return id
|
||||
? {
|
||||
id,
|
||||
label: id,
|
||||
get description() {
|
||||
return t('Configured via OPENAI_MODEL environment variable');
|
||||
},
|
||||
}
|
||||
: null;
|
||||
}
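// Note: `description` appears to be declared as a getter so the t() translation is
// evaluated each time the description is read rather than once at module load, letting
// it track the active locale; the Anthropic helper below follows the same pattern.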
|
||||
|
||||
export function getAnthropicAvailableModelFromEnv(): AvailableModel | null {
|
||||
const id = process.env['ANTHROPIC_MODEL']?.trim();
|
||||
return id ? { id, label: id } : null;
|
||||
return id
|
||||
? {
|
||||
id,
|
||||
label: id,
|
||||
get description() {
|
||||
return t('Configured via ANTHROPIC_MODEL environment variable');
|
||||
},
|
||||
}
|
||||
: null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert core AvailableModel to CLI AvailableModel format
|
||||
*/
|
||||
function convertCoreModelToCliModel(
|
||||
coreModel: CoreAvailableModel,
|
||||
): AvailableModel {
|
||||
return {
|
||||
id: coreModel.id,
|
||||
label: coreModel.label,
|
||||
description: coreModel.description,
|
||||
isVision: coreModel.isVision ?? coreModel.capabilities?.vision ?? false,
|
||||
};
|
||||
}
|
||||
|
||||
/**
 * Get available models for the given authType.
 *
 * If a Config object is provided, uses config.getAvailableModelsForAuthType().
 * For qwen-oauth, always returns the hard-coded models.
 * Falls back to environment variables only when no config is provided.
 */
export function getAvailableModelsForAuthType(
  authType: AuthType,
  config?: Config,
): AvailableModel[] {
  // For qwen-oauth, always use hard-coded models, this aligns with the API gateway.
  if (authType === AuthType.QWEN_OAUTH) {
    return AVAILABLE_MODELS_QWEN;
  }

  // Use config's model registry when available
  if (config) {
    try {
      const models = config.getAvailableModelsForAuthType(authType);
      if (models.length > 0) {
        return models.map(convertCoreModelToCliModel);
      }
    } catch {
      // If config throws (e.g., not initialized), return empty array
    }
    // When a Config object is provided, we intentionally do NOT fall back to env-based
    // "raw" models. These may reflect the currently effective config but should not be
    // presented as selectable options in /model.
    return [];
  }

  // Fall back to environment variables for specific auth types (no config provided)
  switch (authType) {
    case AuthType.QWEN_OAUTH:
      return AVAILABLE_MODELS_QWEN;
    case AuthType.USE_OPENAI: {
      const openAIModel = getOpenAIAvailableModelFromEnv();
      return openAIModel ? [openAIModel] : [];
@@ -80,13 +143,10 @@ export function getAvailableModelsForAuthType(
      return anthropicModel ? [anthropicModel] : [];
    }
    default:
      // For other auth types, return empty array for now
      // This can be expanded later according to the design doc
      return [];
  }
}
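// Minimal sketch of the three branches documented above, written as if called from a
// separate consumer module; the stub Config mirrors the style used in
// availableModels.test.ts and the model ids are placeholders, not real registry contents.
import { AuthType, type Config } from '@qwen-code/qwen-code-core';
import { getAvailableModelsForAuthType } from './availableModels.js';

// 1. qwen-oauth always returns the hard-coded list, with or without a Config.
const qwenModels = getAvailableModelsForAuthType(AuthType.QWEN_OAUTH);

// 2. With a Config, its registry is authoritative; an empty or throwing registry yields
//    [] instead of falling back to environment variables.
const stubConfig = {
  getAvailableModelsForAuthType: () => [{ id: 'gpt-4o', label: 'GPT-4o' }],
} as unknown as Config;
const registryModels = getAvailableModelsForAuthType(AuthType.USE_OPENAI, stubConfig);

// 3. Without a Config, OPENAI_MODEL provides a single-entry fallback for openai.
process.env['OPENAI_MODEL'] = 'gpt-4o';
const envModels = getAvailableModelsForAuthType(AuthType.USE_OPENAI);

console.log(qwenModels.length, registryModels[0]?.id, envModels[0]?.id);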
/**
 * Hard code the default vision model as a string literal,
 * until our coding model supports multimodal.
packages/cli/src/utils/modelConfigUtils.ts (new file, 133 lines)
@@ -0,0 +1,133 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import {
|
||||
AuthType,
|
||||
type ContentGeneratorConfig,
|
||||
type ContentGeneratorConfigSources,
|
||||
resolveModelConfig,
|
||||
type ModelConfigSourcesInput,
|
||||
} from '@qwen-code/qwen-code-core';
|
||||
import type { Settings } from '../config/settings.js';
|
||||
|
||||
export interface CliGenerationConfigInputs {
|
||||
argv: {
|
||||
model?: string | undefined;
|
||||
openaiApiKey?: string | undefined;
|
||||
openaiBaseUrl?: string | undefined;
|
||||
openaiLogging?: boolean | undefined;
|
||||
openaiLoggingDir?: string | undefined;
|
||||
};
|
||||
settings: Settings;
|
||||
selectedAuthType: AuthType | undefined;
|
||||
/**
|
||||
* Injectable env for testability. Defaults to process.env at callsites.
|
||||
*/
|
||||
env?: Record<string, string | undefined>;
|
||||
}
|
||||
|
||||
export interface ResolvedCliGenerationConfig {
|
||||
/** The resolved model id (may be empty string if not resolvable at CLI layer) */
|
||||
model: string;
|
||||
/** API key for OpenAI-compatible auth */
|
||||
apiKey: string;
|
||||
/** Base URL for OpenAI-compatible auth */
|
||||
baseUrl: string;
|
||||
/** The full generation config to pass to core Config */
|
||||
generationConfig: Partial<ContentGeneratorConfig>;
|
||||
/** Source attribution for each resolved field */
|
||||
sources: ContentGeneratorConfigSources;
|
||||
}
|
||||
|
||||
export function getAuthTypeFromEnv(): AuthType | undefined {
|
||||
if (process.env['OPENAI_API_KEY']) {
|
||||
return AuthType.USE_OPENAI;
|
||||
}
|
||||
if (process.env['QWEN_OAUTH']) {
|
||||
return AuthType.QWEN_OAUTH;
|
||||
}
|
||||
|
||||
if (process.env['GEMINI_API_KEY']) {
|
||||
return AuthType.USE_GEMINI;
|
||||
}
|
||||
if (process.env['GOOGLE_API_KEY']) {
|
||||
return AuthType.USE_VERTEX_AI;
|
||||
}
|
||||
if (process.env['ANTHROPIC_API_KEY']) {
|
||||
return AuthType.USE_ANTHROPIC;
|
||||
}
|
||||
|
||||
return undefined;
|
||||
}
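// Note: the checks above are ordered, so when several keys are present the first match
// wins; with both OPENAI_API_KEY and GEMINI_API_KEY set, the inferred auth type is
// AuthType.USE_OPENAI.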
|
||||
|
||||
/**
|
||||
* Unified resolver for CLI generation config.
|
||||
*
|
||||
* Precedence (for OpenAI auth):
|
||||
* - model: argv.model > OPENAI_MODEL > QWEN_MODEL > settings.model.name
|
||||
* - apiKey: argv.openaiApiKey > OPENAI_API_KEY > settings.security.auth.apiKey
|
||||
* - baseUrl: argv.openaiBaseUrl > OPENAI_BASE_URL > settings.security.auth.baseUrl
|
||||
*
|
||||
* For non-OpenAI auth, only argv.model override is respected at CLI layer.
|
||||
*/
|
||||
export function resolveCliGenerationConfig(
|
||||
inputs: CliGenerationConfigInputs,
|
||||
): ResolvedCliGenerationConfig {
|
||||
const { argv, settings, selectedAuthType } = inputs;
|
||||
const env = inputs.env ?? (process.env as Record<string, string | undefined>);
|
||||
|
||||
const authType = selectedAuthType;
|
||||
|
||||
const configSources: ModelConfigSourcesInput = {
|
||||
authType,
|
||||
cli: {
|
||||
model: argv.model,
|
||||
apiKey: argv.openaiApiKey,
|
||||
baseUrl: argv.openaiBaseUrl,
|
||||
},
|
||||
settings: {
|
||||
model: settings.model?.name,
|
||||
apiKey: settings.security?.auth?.apiKey,
|
||||
baseUrl: settings.security?.auth?.baseUrl,
|
||||
generationConfig: settings.model?.generationConfig as
|
||||
| Partial<ContentGeneratorConfig>
|
||||
| undefined,
|
||||
},
|
||||
env,
|
||||
};
|
||||
|
||||
const resolved = resolveModelConfig(configSources);
|
||||
|
||||
// Log warnings if any
|
||||
for (const warning of resolved.warnings) {
|
||||
console.warn(`[modelProviderUtils] ${warning}`);
|
||||
}
|
||||
|
||||
// Resolve OpenAI logging config (CLI-specific, not part of core resolver)
|
||||
const enableOpenAILogging =
|
||||
(typeof argv.openaiLogging === 'undefined'
|
||||
? settings.model?.enableOpenAILogging
|
||||
: argv.openaiLogging) ?? false;
|
||||
|
||||
const openAILoggingDir =
|
||||
argv.openaiLoggingDir || settings.model?.openAILoggingDir;
|
||||
|
||||
// Build the full generation config
|
||||
// Note: we merge the resolved config with logging settings
|
||||
const generationConfig: Partial<ContentGeneratorConfig> = {
|
||||
...resolved.config,
|
||||
enableOpenAILogging,
|
||||
openAILoggingDir,
|
||||
};
|
||||
|
||||
return {
|
||||
model: resolved.config.model || '',
|
||||
apiKey: resolved.config.apiKey || '',
|
||||
baseUrl: resolved.config.baseUrl || '',
|
||||
generationConfig,
|
||||
sources: resolved.sources,
|
||||
};
|
||||
}
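// Minimal sketch exercising resolveCliGenerationConfig with an injected env, roughly as
// a test might. All values are placeholders, the import paths are illustrative, and the
// minimal settings object is only cast to Settings for the example.
import { AuthType } from '@qwen-code/qwen-code-core';
import type { Settings } from '../config/settings.js';
import { resolveCliGenerationConfig } from './modelConfigUtils.js';

const resolved = resolveCliGenerationConfig({
  argv: {}, // no CLI overrides, so env and settings decide
  settings: { model: { name: 'settings-model' } } as unknown as Settings,
  selectedAuthType: AuthType.USE_OPENAI,
  env: {
    OPENAI_MODEL: 'gpt-4o', // beats settings.model.name per the precedence above
    OPENAI_API_KEY: 'sk-placeholder',
    OPENAI_BASE_URL: 'https://api.openai.com/v1',
  },
});
// Expected, per the documented precedence: model 'gpt-4o', apiKey 'sk-placeholder',
// baseUrl 'https://api.openai.com/v1', with resolved.sources attributing each field.
console.log(resolved.model, resolved.baseUrl, resolved.sources);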
|
||||
@@ -57,6 +57,7 @@ describe('systemInfo', () => {
|
||||
getModel: vi.fn().mockReturnValue('test-model'),
|
||||
getIdeMode: vi.fn().mockReturnValue(true),
|
||||
getSessionId: vi.fn().mockReturnValue('test-session-id'),
|
||||
getAuthType: vi.fn().mockReturnValue('test-auth'),
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({
|
||||
baseUrl: 'https://api.openai.com',
|
||||
}),
|
||||
@@ -273,6 +274,9 @@ describe('systemInfo', () => {
|
||||
// Update the mock context to use OpenAI auth
|
||||
mockContext.services.settings.merged.security!.auth!.selectedType =
|
||||
AuthType.USE_OPENAI;
|
||||
vi.mocked(mockContext.services.config!.getAuthType).mockReturnValue(
|
||||
AuthType.USE_OPENAI,
|
||||
);
|
||||
|
||||
const extendedInfo = await getExtendedSystemInfo(mockContext);
|
||||
|
||||
|
||||
@@ -115,8 +115,7 @@ export async function getSystemInfo(
|
||||
const sandboxEnv = getSandboxEnv();
|
||||
const modelVersion = context.services.config?.getModel() || 'Unknown';
|
||||
const cliVersion = await getCliVersion();
|
||||
const selectedAuthType =
|
||||
context.services.settings.merged.security?.auth?.selectedType || '';
|
||||
const selectedAuthType = context.services.config?.getAuthType() || '';
|
||||
const ideClient = await getIdeClientName(context);
|
||||
const sessionId = context.services.config?.getSessionId() || 'unknown';
|
||||
|
||||
|
||||
@@ -14,6 +14,20 @@ import * as JsonOutputAdapterModule from './nonInteractive/io/JsonOutputAdapter.
|
||||
import * as StreamJsonOutputAdapterModule from './nonInteractive/io/StreamJsonOutputAdapter.js';
|
||||
import * as cleanupModule from './utils/cleanup.js';
|
||||
|
||||
// Helper to create a mock Config with modelsConfig
|
||||
function createMockConfig(overrides?: Partial<Config>): Config {
|
||||
return {
|
||||
refreshAuth: vi.fn().mockResolvedValue('refreshed'),
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({ authType: undefined }),
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.QWEN_OAUTH),
|
||||
},
|
||||
...overrides,
|
||||
} as unknown as Config;
|
||||
}
|
||||
|
||||
describe('validateNonInterActiveAuth', () => {
|
||||
let originalEnvGeminiApiKey: string | undefined;
|
||||
let originalEnvVertexAi: string | undefined;
|
||||
@@ -107,17 +121,20 @@ describe('validateNonInterActiveAuth', () => {
|
||||
vi.restoreAllMocks();
|
||||
});
|
||||
|
||||
it('exits if no auth type is configured or env vars set', async () => {
|
||||
const nonInteractiveConfig = {
|
||||
it('exits if validateAuthMethod fails for default auth type', async () => {
|
||||
// Mock validateAuthMethod to return error (e.g., missing API key)
|
||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue(
|
||||
'Missing API key for authentication',
|
||||
);
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.QWEN_OAUTH),
|
||||
},
|
||||
});
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
undefined,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -127,22 +144,21 @@ describe('validateNonInterActiveAuth', () => {
|
||||
expect((e as Error).message).toContain('process.exit(1) called');
|
||||
}
|
||||
expect(consoleErrorSpy).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Please set an Auth method'),
|
||||
expect.stringContaining('Missing API key'),
|
||||
);
|
||||
expect(processExitSpy).toHaveBeenCalledWith(1);
|
||||
});
|
||||
|
||||
it('uses USE_OPENAI if OPENAI_API_KEY is set', async () => {
|
||||
process.env['OPENAI_API_KEY'] = 'fake-openai-key';
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.USE_OPENAI),
|
||||
},
|
||||
});
|
||||
await validateNonInteractiveAuth(
|
||||
undefined,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -151,15 +167,14 @@ describe('validateNonInterActiveAuth', () => {
|
||||
});
|
||||
|
||||
it('uses configured QWEN_OAUTH if provided', async () => {
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.QWEN_OAUTH),
|
||||
},
|
||||
});
|
||||
await validateNonInteractiveAuth(
|
||||
AuthType.QWEN_OAUTH,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -170,16 +185,11 @@ describe('validateNonInterActiveAuth', () => {
|
||||
it('exits if validateAuthMethod returns error', async () => {
|
||||
// Mock validateAuthMethod to return error
|
||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
});
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
AuthType.USE_GEMINI,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -197,14 +207,13 @@ describe('validateNonInterActiveAuth', () => {
|
||||
const validateAuthMethodSpy = vi
|
||||
.spyOn(auth, 'validateAuthMethod')
|
||||
.mockReturnValue('Auth error!');
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
} as unknown as Config;
|
||||
});
|
||||
|
||||
// Even with an invalid auth type, it should not exit
|
||||
// because validation is skipped.
|
||||
// Even with validation errors, it should not exit
|
||||
// because validation is skipped when useExternalAuth is true.
|
||||
await validateNonInteractiveAuth(
|
||||
'invalid-auth-type' as AuthType,
|
||||
true, // useExternalAuth = true
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -213,8 +222,8 @@ describe('validateNonInterActiveAuth', () => {
|
||||
expect(validateAuthMethodSpy).not.toHaveBeenCalled();
|
||||
expect(consoleErrorSpy).not.toHaveBeenCalled();
|
||||
expect(processExitSpy).not.toHaveBeenCalled();
|
||||
// We still expect refreshAuth to be called with the (invalid) type
|
||||
expect(refreshAuthMock).toHaveBeenCalledWith('invalid-auth-type');
|
||||
// refreshAuth is called with the authType from config.modelsConfig.getCurrentAuthType()
|
||||
expect(refreshAuthMock).toHaveBeenCalledWith(AuthType.QWEN_OAUTH);
|
||||
});
|
||||
|
||||
it('uses enforcedAuthType if provided', async () => {
|
||||
@@ -222,11 +231,14 @@ describe('validateNonInterActiveAuth', () => {
|
||||
mockSettings.merged.security!.auth!.selectedType = AuthType.USE_OPENAI;
|
||||
// Set required env var for USE_OPENAI to ensure enforcedAuthType takes precedence
|
||||
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.USE_OPENAI),
|
||||
},
|
||||
});
|
||||
await validateNonInteractiveAuth(
|
||||
AuthType.USE_OPENAI,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -237,16 +249,15 @@ describe('validateNonInterActiveAuth', () => {
|
||||
it('exits if currentAuthType does not match enforcedAuthType', async () => {
|
||||
mockSettings.merged.security!.auth!.enforcedType = AuthType.QWEN_OAUTH;
|
||||
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.TEXT),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.USE_OPENAI),
|
||||
},
|
||||
});
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
AuthType.USE_OPENAI,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -279,18 +290,21 @@ describe('validateNonInterActiveAuth', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('emits error result and exits when no auth is configured', async () => {
|
||||
const nonInteractiveConfig = {
|
||||
it('emits error result and exits when validateAuthMethod fails', async () => {
|
||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue(
|
||||
'Missing API key for authentication',
|
||||
);
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.JSON),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.QWEN_OAUTH),
|
||||
},
|
||||
});
|
||||
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
undefined,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -302,9 +316,7 @@ describe('validateNonInterActiveAuth', () => {
|
||||
|
||||
expect(emitResultMock).toHaveBeenCalledWith({
|
||||
isError: true,
|
||||
errorMessage: expect.stringContaining(
|
||||
'Please set an Auth method in your',
|
||||
),
|
||||
errorMessage: expect.stringContaining('Missing API key'),
|
||||
durationMs: 0,
|
||||
apiDurationMs: 0,
|
||||
numTurns: 0,
|
||||
@@ -319,17 +331,17 @@ describe('validateNonInterActiveAuth', () => {
|
||||
mockSettings.merged.security!.auth!.enforcedType = AuthType.QWEN_OAUTH;
|
||||
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.JSON),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.USE_OPENAI),
|
||||
},
|
||||
});
|
||||
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
undefined,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -354,21 +366,21 @@ describe('validateNonInterActiveAuth', () => {
|
||||
expect(consoleErrorSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('emits error result and exits when validateAuthMethod fails', async () => {
|
||||
it('emits error result and exits when API key validation fails', async () => {
|
||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
|
||||
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.JSON),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.USE_OPENAI),
|
||||
},
|
||||
});
|
||||
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
AuthType.USE_OPENAI,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -413,19 +425,22 @@ describe('validateNonInterActiveAuth', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('emits error result and exits when no auth is configured', async () => {
|
||||
const nonInteractiveConfig = {
|
||||
it('emits error result and exits when validateAuthMethod fails', async () => {
|
||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue(
|
||||
'Missing API key for authentication',
|
||||
);
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.STREAM_JSON),
|
||||
getIncludePartialMessages: vi.fn().mockReturnValue(false),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.QWEN_OAUTH),
|
||||
},
|
||||
});
|
||||
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
undefined,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -437,9 +452,7 @@ describe('validateNonInterActiveAuth', () => {
|
||||
|
||||
expect(emitResultMock).toHaveBeenCalledWith({
|
||||
isError: true,
|
||||
errorMessage: expect.stringContaining(
|
||||
'Please set an Auth method in your',
|
||||
),
|
||||
errorMessage: expect.stringContaining('Missing API key'),
|
||||
durationMs: 0,
|
||||
apiDurationMs: 0,
|
||||
numTurns: 0,
|
||||
@@ -454,18 +467,18 @@ describe('validateNonInterActiveAuth', () => {
|
||||
mockSettings.merged.security!.auth!.enforcedType = AuthType.QWEN_OAUTH;
|
||||
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.STREAM_JSON),
|
||||
getIncludePartialMessages: vi.fn().mockReturnValue(false),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.USE_OPENAI),
|
||||
},
|
||||
});
|
||||
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
undefined,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
@@ -490,22 +503,22 @@ describe('validateNonInterActiveAuth', () => {
|
||||
expect(consoleErrorSpy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('emits error result and exits when validateAuthMethod fails', async () => {
|
||||
it('emits error result and exits when API key validation fails', async () => {
|
||||
vi.spyOn(auth, 'validateAuthMethod').mockReturnValue('Auth error!');
|
||||
process.env['OPENAI_API_KEY'] = 'fake-key';
|
||||
|
||||
const nonInteractiveConfig = {
|
||||
const nonInteractiveConfig = createMockConfig({
|
||||
refreshAuth: refreshAuthMock,
|
||||
getOutputFormat: vi.fn().mockReturnValue(OutputFormat.STREAM_JSON),
|
||||
getIncludePartialMessages: vi.fn().mockReturnValue(false),
|
||||
getContentGeneratorConfig: vi
|
||||
.fn()
|
||||
.mockReturnValue({ authType: undefined }),
|
||||
} as unknown as Config;
|
||||
modelsConfig: {
|
||||
getModel: vi.fn().mockReturnValue('default-model'),
|
||||
getCurrentAuthType: vi.fn().mockReturnValue(AuthType.USE_OPENAI),
|
||||
},
|
||||
});
|
||||
|
||||
try {
|
||||
await validateNonInteractiveAuth(
|
||||
AuthType.USE_OPENAI,
|
||||
undefined,
|
||||
nonInteractiveConfig,
|
||||
mockSettings,
|
||||
|
||||
@@ -5,69 +5,42 @@
|
||||
*/
|
||||
|
||||
import type { Config } from '@qwen-code/qwen-code-core';
|
||||
import { AuthType, OutputFormat } from '@qwen-code/qwen-code-core';
|
||||
import { USER_SETTINGS_PATH } from './config/settings.js';
|
||||
import { OutputFormat } from '@qwen-code/qwen-code-core';
|
||||
import { validateAuthMethod } from './config/auth.js';
|
||||
import { type LoadedSettings } from './config/settings.js';
|
||||
import { JsonOutputAdapter } from './nonInteractive/io/JsonOutputAdapter.js';
|
||||
import { StreamJsonOutputAdapter } from './nonInteractive/io/StreamJsonOutputAdapter.js';
|
||||
import { runExitCleanup } from './utils/cleanup.js';
|
||||
|
||||
function getAuthTypeFromEnv(): AuthType | undefined {
|
||||
if (process.env['OPENAI_API_KEY']) {
|
||||
return AuthType.USE_OPENAI;
|
||||
}
|
||||
if (process.env['QWEN_OAUTH']) {
|
||||
return AuthType.QWEN_OAUTH;
|
||||
}
|
||||
|
||||
if (process.env['GEMINI_API_KEY']) {
|
||||
return AuthType.USE_GEMINI;
|
||||
}
|
||||
if (process.env['GOOGLE_API_KEY']) {
|
||||
return AuthType.USE_VERTEX_AI;
|
||||
}
|
||||
if (process.env['ANTHROPIC_API_KEY']) {
|
||||
return AuthType.USE_ANTHROPIC;
|
||||
}
|
||||
|
||||
return undefined;
|
||||
}
|
||||
|
||||
export async function validateNonInteractiveAuth(
|
||||
configuredAuthType: AuthType | undefined,
|
||||
useExternalAuth: boolean | undefined,
|
||||
nonInteractiveConfig: Config,
|
||||
settings: LoadedSettings,
|
||||
): Promise<Config> {
|
||||
try {
|
||||
const enforcedType = settings.merged.security?.auth?.enforcedType;
|
||||
if (enforcedType) {
|
||||
const currentAuthType = getAuthTypeFromEnv();
|
||||
if (currentAuthType !== enforcedType) {
|
||||
const message = `The configured auth type is ${enforcedType}, but the current auth type is ${currentAuthType}. Please re-authenticate with the correct type.`;
|
||||
throw new Error(message);
|
||||
}
|
||||
// Get the actual authType from config which has already resolved CLI args, env vars, and settings
|
||||
const authType = nonInteractiveConfig.modelsConfig.getCurrentAuthType();
|
||||
if (!authType) {
|
||||
throw new Error(
|
||||
'No auth type is selected. Please configure an auth type (e.g. via settings or `--auth-type`) before running in non-interactive mode.',
|
||||
);
|
||||
}
|
||||
const resolvedAuthType: NonNullable<typeof authType> = authType;
|
||||
|
||||
const effectiveAuthType =
|
||||
enforcedType || configuredAuthType || getAuthTypeFromEnv();
|
||||
|
||||
if (!effectiveAuthType) {
|
||||
const message = `Please set an Auth method in your ${USER_SETTINGS_PATH} or specify one of the following environment variables before running: QWEN_OAUTH, OPENAI_API_KEY`;
|
||||
const enforcedType = settings.merged.security?.auth?.enforcedType;
|
||||
if (enforcedType && enforcedType !== resolvedAuthType) {
|
||||
const message = `The configured auth type is ${enforcedType}, but the current auth type is ${resolvedAuthType}. Please re-authenticate with the correct type.`;
|
||||
throw new Error(message);
|
||||
}
|
||||
|
||||
const authType: AuthType = effectiveAuthType as AuthType;
|
||||
|
||||
if (!useExternalAuth) {
|
||||
const err = validateAuthMethod(String(authType));
|
||||
const err = validateAuthMethod(resolvedAuthType, nonInteractiveConfig);
|
||||
if (err != null) {
|
||||
throw new Error(err);
|
||||
}
|
||||
}
|
||||
|
||||
await nonInteractiveConfig.refreshAuth(authType);
|
||||
await nonInteractiveConfig.refreshAuth(resolvedAuthType);
|
||||
return nonInteractiveConfig;
|
||||
} catch (error) {
|
||||
const outputFormat = nonInteractiveConfig.getOutputFormat();
|
||||
|
||||
@@ -8,12 +8,8 @@ export * from './src/index.js';
|
||||
export { Storage } from './src/config/storage.js';
|
||||
export {
|
||||
DEFAULT_QWEN_MODEL,
|
||||
DEFAULT_QWEN_FLASH_MODEL,
|
||||
DEFAULT_QWEN_EMBEDDING_MODEL,
|
||||
DEFAULT_GEMINI_MODEL,
|
||||
DEFAULT_GEMINI_MODEL_AUTO,
|
||||
DEFAULT_GEMINI_FLASH_MODEL,
|
||||
DEFAULT_GEMINI_FLASH_LITE_MODEL,
|
||||
DEFAULT_GEMINI_EMBEDDING_MODEL,
|
||||
} from './src/config/models.js';
|
||||
export {
|
||||
serializeTerminalToObject,
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "@qwen-code/qwen-code-core",
|
||||
"version": "0.6.1-nightly.20260108.570ec432",
|
||||
"version": "0.7.0",
|
||||
"description": "Qwen Code Core",
|
||||
"repository": {
|
||||
"type": "git",
|
||||
|
||||
@@ -15,10 +15,16 @@ import {
|
||||
DEFAULT_OTLP_ENDPOINT,
|
||||
QwenLogger,
|
||||
} from '../telemetry/index.js';
|
||||
import type { ContentGeneratorConfig } from '../core/contentGenerator.js';
|
||||
import type {
|
||||
ContentGenerator,
|
||||
ContentGeneratorConfig,
|
||||
} from '../core/contentGenerator.js';
|
||||
import { DEFAULT_DASHSCOPE_BASE_URL } from '../core/openaiContentGenerator/constants.js';
|
||||
import {
|
||||
AuthType,
|
||||
createContentGenerator,
|
||||
createContentGeneratorConfig,
|
||||
resolveContentGeneratorConfigWithSources,
|
||||
} from '../core/contentGenerator.js';
|
||||
import { GeminiClient } from '../core/client.js';
|
||||
import { GitService } from '../services/gitService.js';
|
||||
@@ -208,6 +214,19 @@ describe('Server Config (config.ts)', () => {
|
||||
vi.spyOn(QwenLogger.prototype, 'logStartSessionEvent').mockImplementation(
|
||||
async () => undefined,
|
||||
);
|
||||
|
||||
// Setup default mock for resolveContentGeneratorConfigWithSources
|
||||
vi.mocked(resolveContentGeneratorConfigWithSources).mockImplementation(
|
||||
(_config, authType, generationConfig) => ({
|
||||
config: {
|
||||
...generationConfig,
|
||||
authType,
|
||||
model: generationConfig?.model || MODEL,
|
||||
apiKey: 'test-key',
|
||||
} as ContentGeneratorConfig,
|
||||
sources: {},
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
describe('initialize', () => {
|
||||
@@ -255,31 +274,28 @@ describe('Server Config (config.ts)', () => {
|
||||
const mockContentConfig = {
|
||||
apiKey: 'test-key',
|
||||
model: 'qwen3-coder-plus',
|
||||
authType,
|
||||
};
|
||||
|
||||
vi.mocked(createContentGeneratorConfig).mockReturnValue(
|
||||
mockContentConfig,
|
||||
);
|
||||
|
||||
// Set fallback mode to true to ensure it gets reset
|
||||
config.setFallbackMode(true);
|
||||
expect(config.isInFallbackMode()).toBe(true);
|
||||
vi.mocked(resolveContentGeneratorConfigWithSources).mockReturnValue({
|
||||
config: mockContentConfig as ContentGeneratorConfig,
|
||||
sources: {},
|
||||
});
|
||||
|
||||
await config.refreshAuth(authType);
|
||||
|
||||
expect(createContentGeneratorConfig).toHaveBeenCalledWith(
|
||||
expect(resolveContentGeneratorConfigWithSources).toHaveBeenCalledWith(
|
||||
config,
|
||||
authType,
|
||||
{
|
||||
expect.objectContaining({
|
||||
model: MODEL,
|
||||
baseUrl: undefined,
|
||||
},
|
||||
}),
|
||||
expect.anything(),
|
||||
expect.anything(),
|
||||
);
|
||||
// Verify that contentGeneratorConfig is updated
|
||||
expect(config.getContentGeneratorConfig()).toEqual(mockContentConfig);
|
||||
expect(GeminiClient).toHaveBeenCalledWith(config);
|
||||
// Verify that fallback mode is reset
|
||||
expect(config.isInFallbackMode()).toBe(false);
|
||||
});
|
||||
|
||||
it('should not strip thoughts when switching from Vertex to GenAI', async () => {
|
||||
@@ -300,6 +316,129 @@ describe('Server Config (config.ts)', () => {
|
||||
});
|
||||
});
|
||||
|
||||
describe('model switching optimization (QWEN_OAUTH)', () => {
|
||||
it('should switch qwen-oauth model in-place without refreshing auth when safe', async () => {
|
||||
const config = new Config(baseParams);
|
||||
|
||||
const mockContentConfig: ContentGeneratorConfig = {
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
model: 'coder-model',
|
||||
apiKey: 'QWEN_OAUTH_DYNAMIC_TOKEN',
|
||||
baseUrl: DEFAULT_DASHSCOPE_BASE_URL,
|
||||
timeout: 60000,
|
||||
maxRetries: 3,
|
||||
} as ContentGeneratorConfig;
|
||||
|
||||
vi.mocked(resolveContentGeneratorConfigWithSources).mockImplementation(
|
||||
(_config, authType, generationConfig) => ({
|
||||
config: {
|
||||
...mockContentConfig,
|
||||
authType,
|
||||
model: generationConfig?.model ?? mockContentConfig.model,
|
||||
} as ContentGeneratorConfig,
|
||||
sources: {},
|
||||
}),
|
||||
);
|
||||
vi.mocked(createContentGenerator).mockResolvedValue({
|
||||
generateContent: vi.fn(),
|
||||
generateContentStream: vi.fn(),
|
||||
countTokens: vi.fn(),
|
||||
embedContent: vi.fn(),
|
||||
} as unknown as ContentGenerator);
|
||||
|
||||
// Establish initial qwen-oauth content generator config/content generator.
|
||||
await config.refreshAuth(AuthType.QWEN_OAUTH);
|
||||
|
||||
// Spy after initial refresh to ensure model switch does not re-trigger refreshAuth.
|
||||
const refreshSpy = vi.spyOn(config, 'refreshAuth');
|
||||
|
||||
await config.switchModel(AuthType.QWEN_OAUTH, 'vision-model');
|
||||
|
||||
expect(config.getModel()).toBe('vision-model');
|
||||
expect(refreshSpy).not.toHaveBeenCalled();
|
||||
// Called once during initial refreshAuth + once during handleModelChange diffing.
|
||||
expect(
|
||||
vi.mocked(resolveContentGeneratorConfigWithSources),
|
||||
).toHaveBeenCalledTimes(2);
|
||||
expect(vi.mocked(createContentGenerator)).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
});
|
||||
|
||||
describe('model switching with different credentials (OpenAI)', () => {
|
||||
it('should refresh auth when switching to model with different envKey', async () => {
|
||||
// This test verifies the fix for switching between modelProvider models
|
||||
// with different envKeys (e.g., deepseek-chat with DEEPSEEK_API_KEY)
|
||||
const configWithModelProviders = new Config({
|
||||
...baseParams,
|
||||
authType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig: {
|
||||
openai: [
|
||||
{
|
||||
id: 'model-a',
|
||||
name: 'Model A',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_A',
|
||||
},
|
||||
{
|
||||
id: 'model-b',
|
||||
name: 'Model B',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_B',
|
||||
},
|
||||
],
|
||||
},
|
||||
});
|
||||
|
||||
const mockContentConfigA: ContentGeneratorConfig = {
|
||||
authType: AuthType.USE_OPENAI,
|
||||
model: 'model-a',
|
||||
apiKey: 'key-a',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
} as ContentGeneratorConfig;
|
||||
|
||||
const mockContentConfigB: ContentGeneratorConfig = {
|
||||
authType: AuthType.USE_OPENAI,
|
||||
model: 'model-b',
|
||||
apiKey: 'key-b',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
} as ContentGeneratorConfig;
|
||||
|
||||
vi.mocked(resolveContentGeneratorConfigWithSources).mockImplementation(
|
||||
(_config, _authType, generationConfig) => {
|
||||
const model = generationConfig?.model;
|
||||
return {
|
||||
config:
|
||||
model === 'model-b' ? mockContentConfigB : mockContentConfigA,
|
||||
sources: {},
|
||||
};
|
||||
},
|
||||
);
|
||||
|
||||
vi.mocked(createContentGenerator).mockResolvedValue({
|
||||
generateContent: vi.fn(),
|
||||
generateContentStream: vi.fn(),
|
||||
countTokens: vi.fn(),
|
||||
embedContent: vi.fn(),
|
||||
} as unknown as ContentGenerator);
|
||||
|
||||
// Initialize with model-a
|
||||
await configWithModelProviders.refreshAuth(AuthType.USE_OPENAI);
|
||||
|
||||
// Spy on refreshAuth to verify it's called when switching to model-b
|
||||
const refreshSpy = vi.spyOn(configWithModelProviders, 'refreshAuth');
|
||||
|
||||
// Switch to model-b (different envKey)
|
||||
await configWithModelProviders.switchModel(
|
||||
AuthType.USE_OPENAI,
|
||||
'model-b',
|
||||
);
|
||||
|
||||
// Should trigger full refresh because envKey changed
|
||||
expect(refreshSpy).toHaveBeenCalledWith(AuthType.USE_OPENAI);
|
||||
expect(configWithModelProviders.getModel()).toBe('model-b');
|
||||
});
|
||||
});
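// Note: together these suites pin down the switching contract — a qwen-oauth model
// change keeps the existing credentials and content generator (no extra refreshAuth),
// while switching between provider entries whose envKey differs forces a full
// refreshAuth so the new key is read from the environment.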
|
||||
|
||||
it('Config constructor should store userMemory correctly', () => {
|
||||
const config = new Config(baseParams);
|
||||
|
||||
|
||||
@@ -16,9 +16,8 @@ import { ProxyAgent, setGlobalDispatcher } from 'undici';
|
||||
import type {
|
||||
ContentGenerator,
|
||||
ContentGeneratorConfig,
|
||||
AuthType,
|
||||
} from '../core/contentGenerator.js';
|
||||
import type { FallbackModelHandler } from '../fallback/types.js';
|
||||
import type { ContentGeneratorConfigSources } from '../core/contentGenerator.js';
|
||||
import type { MCPOAuthConfig } from '../mcp/oauth-provider.js';
|
||||
import type { ShellExecutionConfig } from '../services/shellExecutionService.js';
|
||||
import type { AnyToolInvocation } from '../tools/tools.js';
|
||||
@@ -27,8 +26,9 @@ import type { AnyToolInvocation } from '../tools/tools.js';
|
||||
import { BaseLlmClient } from '../core/baseLlmClient.js';
|
||||
import { GeminiClient } from '../core/client.js';
|
||||
import {
|
||||
AuthType,
|
||||
createContentGenerator,
|
||||
createContentGeneratorConfig,
|
||||
resolveContentGeneratorConfigWithSources,
|
||||
} from '../core/contentGenerator.js';
|
||||
import { tokenLimit } from '../core/tokenLimits.js';
|
||||
|
||||
@@ -94,7 +94,7 @@ import {
|
||||
DEFAULT_FILE_FILTERING_OPTIONS,
|
||||
DEFAULT_MEMORY_FILE_FILTERING_OPTIONS,
|
||||
} from './constants.js';
|
||||
import { DEFAULT_QWEN_EMBEDDING_MODEL, DEFAULT_QWEN_MODEL } from './models.js';
|
||||
import { DEFAULT_QWEN_EMBEDDING_MODEL } from './models.js';
|
||||
import { Storage } from './storage.js';
|
||||
import { ChatRecordingService } from '../services/chatRecordingService.js';
|
||||
import {
|
||||
@@ -103,6 +103,12 @@ import {
|
||||
} from '../services/sessionService.js';
|
||||
import { randomUUID } from 'node:crypto';
|
||||
|
||||
import {
|
||||
ModelsConfig,
|
||||
type ModelProvidersConfig,
|
||||
type AvailableModel,
|
||||
} from '../models/index.js';
|
||||
|
||||
// Re-export types
|
||||
export type { AnyToolInvocation, FileFilteringOptions, MCPOAuthConfig };
|
||||
export {
|
||||
@@ -318,6 +324,11 @@ export interface ConfigParameters {
|
||||
ideMode?: boolean;
|
||||
authType?: AuthType;
|
||||
generationConfig?: Partial<ContentGeneratorConfig>;
|
||||
/**
|
||||
* Optional source map for generationConfig fields (e.g. CLI/env/settings attribution).
|
||||
* This is used to produce per-field source badges in the UI.
|
||||
*/
|
||||
generationConfigSources?: ContentGeneratorConfigSources;
|
||||
cliVersion?: string;
|
||||
loadMemoryFromIncludeDirectories?: boolean;
|
||||
chatRecording?: boolean;
|
||||
@@ -353,6 +364,8 @@ export interface ConfigParameters {
|
||||
sdkMode?: boolean;
|
||||
sessionSubagents?: SubagentConfig[];
|
||||
channel?: string;
|
||||
/** Model providers configuration grouped by authType */
|
||||
modelProvidersConfig?: ModelProvidersConfig;
|
||||
}
|
||||
|
||||
function normalizeConfigOutputFormat(
|
||||
@@ -394,9 +407,12 @@ export class Config {
|
||||
private skillManager!: SkillManager;
|
||||
private fileSystemService: FileSystemService;
|
||||
private contentGeneratorConfig!: ContentGeneratorConfig;
|
||||
private contentGeneratorConfigSources: ContentGeneratorConfigSources = {};
|
||||
private contentGenerator!: ContentGenerator;
|
||||
private _generationConfig: Partial<ContentGeneratorConfig>;
|
||||
private readonly embeddingModel: string;
|
||||
|
||||
private _modelsConfig!: ModelsConfig;
|
||||
private readonly modelProvidersConfig?: ModelProvidersConfig;
|
||||
private readonly sandbox: SandboxConfig | undefined;
|
||||
private readonly targetDir: string;
|
||||
private workspaceContext: WorkspaceContext;
|
||||
@@ -445,7 +461,6 @@ export class Config {
|
||||
private readonly folderTrust: boolean;
|
||||
private ideMode: boolean;
|
||||
|
||||
private inFallbackMode = false;
|
||||
private readonly maxSessionTurns: number;
|
||||
private readonly sessionTokenLimit: number;
|
||||
private readonly listExtensions: boolean;
|
||||
@@ -454,8 +469,6 @@ export class Config {
|
||||
name: string;
|
||||
extensionName: string;
|
||||
}>;
|
||||
fallbackModelHandler?: FallbackModelHandler;
|
||||
private quotaErrorOccurred: boolean = false;
|
||||
private readonly summarizeToolOutput:
|
||||
| Record<string, SummarizeToolOutputSettings>
|
||||
| undefined;
|
||||
@@ -570,13 +583,7 @@ export class Config {
|
||||
this.folderTrustFeature = params.folderTrustFeature ?? false;
|
||||
this.folderTrust = params.folderTrust ?? false;
|
||||
this.ideMode = params.ideMode ?? false;
|
||||
this._generationConfig = {
|
||||
model: params.model,
|
||||
...(params.generationConfig || {}),
|
||||
baseUrl: params.generationConfig?.baseUrl,
|
||||
};
|
||||
this.contentGeneratorConfig = this
|
||||
._generationConfig as ContentGeneratorConfig;
|
||||
this.modelProvidersConfig = params.modelProvidersConfig;
|
||||
this.cliVersion = params.cliVersion;
|
||||
|
||||
this.chatRecordingEnabled = params.chatRecording ?? true;
|
||||
@@ -619,6 +626,22 @@ export class Config {
|
||||
setGeminiMdFilename(params.contextFileName);
|
||||
}
// Create ModelsConfig for centralized model management
|
||||
// Prefer params.authType over generationConfig.authType because:
|
||||
// - params.authType preserves undefined (user hasn't selected yet)
|
||||
// - generationConfig.authType may have a default value from resolvers
|
||||
this._modelsConfig = new ModelsConfig({
|
||||
initialAuthType: params.authType ?? params.generationConfig?.authType,
|
||||
modelProvidersConfig: this.modelProvidersConfig,
|
||||
generationConfig: {
|
||||
model: params.model,
|
||||
...(params.generationConfig || {}),
|
||||
baseUrl: params.generationConfig?.baseUrl,
|
||||
},
|
||||
generationConfigSources: params.generationConfigSources,
|
||||
onModelChange: this.handleModelChange.bind(this),
|
||||
});
|
||||
|
||||
if (this.telemetrySettings.enabled) {
|
||||
initializeTelemetry(this);
|
||||
}
|
||||
@@ -669,45 +692,61 @@ export class Config {
|
||||
return this.contentGenerator;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the ModelsConfig instance for model-related operations.
|
||||
* External code (e.g., CLI) can use this to access model configuration.
|
||||
*/
|
||||
get modelsConfig(): ModelsConfig {
|
||||
return this._modelsConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* Updates the credentials in the generation config.
|
||||
* This is needed when credentials are set after Config construction.
|
||||
* Used exclusively by `OpenAIKeyPrompt` to update credentials via `/auth`.
|
||||
* Delegates to ModelsConfig.
|
||||
*/
|
||||
updateCredentials(credentials: {
|
||||
apiKey?: string;
|
||||
baseUrl?: string;
|
||||
model?: string;
|
||||
}): void {
|
||||
if (credentials.apiKey) {
|
||||
this._generationConfig.apiKey = credentials.apiKey;
|
||||
}
|
||||
if (credentials.baseUrl) {
|
||||
this._generationConfig.baseUrl = credentials.baseUrl;
|
||||
}
|
||||
if (credentials.model) {
|
||||
this._generationConfig.model = credentials.model;
|
||||
}
|
||||
this._modelsConfig.updateCredentials(credentials);
|
||||
}
|
||||
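As a usage sketch (assumed call site), this is roughly how an interactive key prompt such as `OpenAIKeyPrompt` might push late-entered credentials into the config; the literal values are placeholders.

```ts
// Assumed call site: an /auth prompt collecting an OpenAI-compatible key.
// `config` is an initialized Config instance.
config.updateCredentials({
  apiKey: 'sk-placeholder',              // value typed by the user
  baseUrl: 'https://example.invalid/v1', // placeholder endpoint
  model: 'my-openai-model',              // placeholder model id
});
// Both the legacy _generationConfig and ModelsConfig now carry the same values.
```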
|
||||
/**
|
||||
* Refresh authentication and rebuild ContentGenerator.
|
||||
*/
|
||||
async refreshAuth(authMethod: AuthType, isInitialAuth?: boolean) {
|
||||
const newContentGeneratorConfig = createContentGeneratorConfig(
|
||||
// Sync modelsConfig state for this auth refresh
|
||||
const modelId = this._modelsConfig.getModel();
|
||||
this._modelsConfig.syncAfterAuthRefresh(authMethod, modelId);
|
||||
|
||||
// Check and consume cached credentials flag
|
||||
const requireCached =
|
||||
this._modelsConfig.consumeRequireCachedCredentialsFlag();
|
||||
|
||||
const { config, sources } = resolveContentGeneratorConfigWithSources(
|
||||
this,
|
||||
authMethod,
|
||||
this._generationConfig,
|
||||
this._modelsConfig.getGenerationConfig(),
|
||||
this._modelsConfig.getGenerationConfigSources(),
|
||||
{
|
||||
strictModelProvider:
|
||||
this._modelsConfig.isStrictModelProviderSelection(),
|
||||
},
|
||||
);
|
||||
const newContentGeneratorConfig = config;
|
||||
this.contentGenerator = await createContentGenerator(
|
||||
newContentGeneratorConfig,
|
||||
this,
|
||||
isInitialAuth,
|
||||
requireCached ? true : isInitialAuth,
|
||||
);
|
||||
// Only assign to instance properties after successful initialization
|
||||
this.contentGeneratorConfig = newContentGeneratorConfig;
|
||||
this.contentGeneratorConfigSources = sources;
|
||||
|
||||
// Initialize BaseLlmClient now that the ContentGenerator is available
|
||||
this.baseLlmClient = new BaseLlmClient(this.contentGenerator, this);
|
||||
|
||||
// Reset the session flag since we're explicitly changing auth and using the default model
|
||||
this.inFallbackMode = false;
|
||||
}
|
||||
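A hedged sketch of how a caller might drive `refreshAuth` after the user picks a different auth flow; error handling and the surrounding UI flow are omitted, and `config` is assumed to be an initialized `Config` instance.

```ts
// Assumed flow: re-authenticate after the user switches auth methods.
try {
  await config.refreshAuth(AuthType.USE_OPENAI, /* isInitialAuth */ false);
} catch (err) {
  // Validation errors from resolveContentGeneratorConfigWithSources surface here,
  // e.g. a missing API key for the selected provider.
  console.error('Auth refresh failed:', err);
}
```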
|
||||
/**
|
||||
@@ -767,31 +806,125 @@ export class Config {
|
||||
return this.contentGeneratorConfig;
|
||||
}
|
||||
|
||||
getModel(): string {
|
||||
return this.contentGeneratorConfig?.model || DEFAULT_QWEN_MODEL;
|
||||
getContentGeneratorConfigSources(): ContentGeneratorConfigSources {
|
||||
// If contentGeneratorConfigSources is empty (before initializeAuth),
|
||||
// get sources from ModelsConfig
|
||||
if (
|
||||
Object.keys(this.contentGeneratorConfigSources).length === 0 &&
|
||||
this._modelsConfig
|
||||
) {
|
||||
return this._modelsConfig.getGenerationConfigSources();
|
||||
}
|
||||
return this.contentGeneratorConfigSources;
|
||||
}
|
||||
|
||||
getModel(): string {
|
||||
return this.contentGeneratorConfig?.model || this._modelsConfig.getModel();
|
||||
}
|
||||
|
||||
/**
|
||||
* Set model programmatically (e.g., VLM auto-switch, fallback).
|
||||
* Delegates to ModelsConfig.
|
||||
*/
|
||||
async setModel(
|
||||
newModel: string,
|
||||
_metadata?: { reason?: string; context?: string },
|
||||
metadata?: { reason?: string; context?: string },
|
||||
): Promise<void> {
|
||||
await this._modelsConfig.setModel(newModel, metadata);
|
||||
// Also update contentGeneratorConfig for hot-update compatibility
|
||||
if (this.contentGeneratorConfig) {
|
||||
this.contentGeneratorConfig.model = newModel;
|
||||
}
|
||||
// TODO: Log metadata for telemetry if needed.
// This metadata can be used for tracking model switches (reason, context).
|
||||
}
|
||||
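A small usage sketch of the programmatic switch path delegated above; the model id and metadata strings are illustrative placeholders.

```ts
// Assumed programmatic switch (e.g. a VLM auto-switch or a fallback path).
// `config` is an initialized Config instance.
await config.setModel('my-vision-model', {
  reason: 'vlm-auto-switch',       // illustrative metadata
  context: 'image input detected', // illustrative metadata
});
// contentGeneratorConfig.model is hot-updated too, so config.getModel()
// reflects the new id immediately.
```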
|
||||
isInFallbackMode(): boolean {
|
||||
return this.inFallbackMode;
|
||||
/**
|
||||
* Handle model change from ModelsConfig.
|
||||
* This updates the content generator config with the new model settings.
|
||||
*/
|
||||
private async handleModelChange(
|
||||
authType: AuthType,
|
||||
requiresRefresh: boolean,
|
||||
): Promise<void> {
|
||||
if (!this.contentGeneratorConfig) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Hot update path: only supported for qwen-oauth.
|
||||
// For other auth types we always refresh to recreate the ContentGenerator.
|
||||
//
|
||||
// Rationale:
|
||||
// - Non-qwen providers may need to re-validate credentials / baseUrl / envKey.
|
||||
// - ModelsConfig.applyResolvedModelDefaults can clear or change credentials sources.
|
||||
// - Refresh keeps runtime behavior consistent and centralized.
|
||||
if (authType === AuthType.QWEN_OAUTH && !requiresRefresh) {
|
||||
const { config, sources } = resolveContentGeneratorConfigWithSources(
|
||||
this,
|
||||
authType,
|
||||
this._modelsConfig.getGenerationConfig(),
|
||||
this._modelsConfig.getGenerationConfigSources(),
|
||||
{
|
||||
strictModelProvider:
|
||||
this._modelsConfig.isStrictModelProviderSelection(),
|
||||
},
|
||||
);
|
||||
|
||||
// Hot-update fields (qwen-oauth models share the same auth + client).
|
||||
this.contentGeneratorConfig.model = config.model;
|
||||
this.contentGeneratorConfig.samplingParams = config.samplingParams;
|
||||
this.contentGeneratorConfig.disableCacheControl =
|
||||
config.disableCacheControl;
|
||||
|
||||
if ('model' in sources) {
|
||||
this.contentGeneratorConfigSources['model'] = sources['model'];
|
||||
}
|
||||
if ('samplingParams' in sources) {
|
||||
this.contentGeneratorConfigSources['samplingParams'] =
|
||||
sources['samplingParams'];
|
||||
}
|
||||
if ('disableCacheControl' in sources) {
|
||||
this.contentGeneratorConfigSources['disableCacheControl'] =
|
||||
sources['disableCacheControl'];
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
// Full refresh path
|
||||
await this.refreshAuth(authType);
|
||||
}
|
||||
|
||||
setFallbackMode(active: boolean): void {
|
||||
this.inFallbackMode = active;
|
||||
/**
|
||||
* Get available models for the current authType.
|
||||
* Delegates to ModelsConfig.
|
||||
*/
|
||||
getAvailableModels(): AvailableModel[] {
|
||||
return this._modelsConfig.getAvailableModels();
|
||||
}
|
||||
|
||||
setFallbackModelHandler(handler: FallbackModelHandler): void {
|
||||
this.fallbackModelHandler = handler;
|
||||
/**
|
||||
* Get available models for a specific authType.
|
||||
* Delegates to ModelsConfig.
|
||||
*/
|
||||
getAvailableModelsForAuthType(authType: AuthType): AvailableModel[] {
|
||||
return this._modelsConfig.getAvailableModelsForAuthType(authType);
|
||||
}
|
||||
|
||||
/**
|
||||
* Switch authType+model via registry-backed selection.
|
||||
* This triggers a refresh of the ContentGenerator when required (always on authType changes).
|
||||
* For qwen-oauth model switches that are hot-update safe, this may update in place.
|
||||
*
|
||||
* @param authType - Target authentication type
|
||||
* @param modelId - Target model ID
|
||||
* @param options - Additional options like requireCachedCredentials
|
||||
* @param metadata - Metadata for logging/tracking
|
||||
*/
|
||||
async switchModel(
|
||||
authType: AuthType,
|
||||
modelId: string,
|
||||
options?: { requireCachedCredentials?: boolean },
|
||||
metadata?: { reason?: string; context?: string },
|
||||
): Promise<void> {
|
||||
await this._modelsConfig.switchModel(authType, modelId, options, metadata);
|
||||
}
|
||||
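A sketch of a registry-backed switch as the `/model` picker might issue it; the entry id and metadata are placeholders, not real provider entries.

```ts
// Assumed picker flow: jump to a provider entry declared in settings.
// `config` is an initialized Config instance.
await config.switchModel(
  AuthType.USE_OPENAI,
  'my-openai-model', // placeholder entry id
  { requireCachedCredentials: false },
  { reason: 'user-selection', context: '/model picker' }, // illustrative
);
```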
|
||||
getMaxSessionTurns(): number {
|
||||
@@ -802,14 +935,6 @@ export class Config {
|
||||
return this.sessionTokenLimit;
|
||||
}
|
||||
|
||||
setQuotaErrorOccurred(value: boolean): void {
|
||||
this.quotaErrorOccurred = value;
|
||||
}
|
||||
|
||||
getQuotaErrorOccurred(): boolean {
|
||||
return this.quotaErrorOccurred;
|
||||
}
|
||||
|
||||
getEmbeddingModel(): string {
|
||||
return this.embeddingModel;
|
||||
}
|
||||
@@ -1151,7 +1276,7 @@ export class Config {
|
||||
}
|
||||
|
||||
getAuthType(): AuthType | undefined {
|
||||
return this.contentGeneratorConfig.authType;
|
||||
return this.contentGeneratorConfig?.authType;
|
||||
}
|
||||
|
||||
getCliVersion(): string | undefined {
|
||||
|
||||
@@ -1,99 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Google LLC
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect, beforeEach, vi } from 'vitest';
|
||||
import { Config } from './config.js';
|
||||
import { DEFAULT_GEMINI_MODEL, DEFAULT_GEMINI_FLASH_MODEL } from './models.js';
|
||||
import fs from 'node:fs';
|
||||
|
||||
vi.mock('node:fs');
|
||||
|
||||
describe('Flash Model Fallback Configuration', () => {
|
||||
let config: Config;
|
||||
|
||||
beforeEach(() => {
|
||||
vi.mocked(fs.existsSync).mockReturnValue(true);
|
||||
vi.mocked(fs.statSync).mockReturnValue({
|
||||
isDirectory: () => true,
|
||||
} as fs.Stats);
|
||||
config = new Config({
|
||||
targetDir: '/test',
|
||||
debugMode: false,
|
||||
cwd: '/test',
|
||||
model: DEFAULT_GEMINI_MODEL,
|
||||
});
|
||||
|
||||
// Initialize contentGeneratorConfig for testing
|
||||
(
|
||||
config as unknown as { contentGeneratorConfig: unknown }
|
||||
).contentGeneratorConfig = {
|
||||
model: DEFAULT_GEMINI_MODEL,
|
||||
authType: 'gemini-api-key',
|
||||
};
|
||||
});
|
||||
|
||||
// These tests do not actually test fallback. isInFallbackMode() only returns true
// when setFallbackMode has been called with true. This decouples setting a model
// from the fallback mechanism, which will be necessary when we introduce more
// intelligent model routing.
|
||||
describe('setModel', () => {
|
||||
it('should only mark as switched if contentGeneratorConfig exists', async () => {
|
||||
// Create config without initializing contentGeneratorConfig
|
||||
const newConfig = new Config({
|
||||
targetDir: '/test',
|
||||
debugMode: false,
|
||||
cwd: '/test',
|
||||
model: DEFAULT_GEMINI_MODEL,
|
||||
});
|
||||
|
||||
// Should not crash when contentGeneratorConfig is undefined
|
||||
await newConfig.setModel(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
expect(newConfig.isInFallbackMode()).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getModel', () => {
|
||||
it('should return contentGeneratorConfig model if available', async () => {
|
||||
// Simulate initialized content generator config
|
||||
await config.setModel(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
expect(config.getModel()).toBe(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
});
|
||||
|
||||
it('should fall back to initial model if contentGeneratorConfig is not available', () => {
|
||||
// Test with fresh config where contentGeneratorConfig might not be set
|
||||
const newConfig = new Config({
|
||||
targetDir: '/test',
|
||||
debugMode: false,
|
||||
cwd: '/test',
|
||||
model: 'custom-model',
|
||||
});
|
||||
|
||||
expect(newConfig.getModel()).toBe('custom-model');
|
||||
});
|
||||
});
|
||||
|
||||
describe('isInFallbackMode', () => {
|
||||
it('should start as false for new session', () => {
|
||||
expect(config.isInFallbackMode()).toBe(false);
|
||||
});
|
||||
|
||||
it('should remain false if no model switch occurs', () => {
|
||||
// Perform other operations that don't involve model switching
|
||||
expect(config.isInFallbackMode()).toBe(false);
|
||||
});
|
||||
|
||||
it('should persist switched state throughout session', async () => {
|
||||
await config.setModel(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
// Setting state for fallback mode as is expected of clients
|
||||
config.setFallbackMode(true);
|
||||
expect(config.isInFallbackMode()).toBe(true);
|
||||
|
||||
// Should remain true even after getting model
|
||||
config.getModel();
|
||||
expect(config.isInFallbackMode()).toBe(true);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -1,83 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Google LLC
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect } from 'vitest';
|
||||
import {
|
||||
getEffectiveModel,
|
||||
DEFAULT_GEMINI_MODEL,
|
||||
DEFAULT_GEMINI_FLASH_MODEL,
|
||||
DEFAULT_GEMINI_FLASH_LITE_MODEL,
|
||||
} from './models.js';
|
||||
|
||||
describe('getEffectiveModel', () => {
|
||||
describe('When NOT in fallback mode', () => {
|
||||
const isInFallbackMode = false;
|
||||
|
||||
it('should return the Pro model when Pro is requested', () => {
|
||||
const model = getEffectiveModel(isInFallbackMode, DEFAULT_GEMINI_MODEL);
|
||||
expect(model).toBe(DEFAULT_GEMINI_MODEL);
|
||||
});
|
||||
|
||||
it('should return the Flash model when Flash is requested', () => {
|
||||
const model = getEffectiveModel(
|
||||
isInFallbackMode,
|
||||
DEFAULT_GEMINI_FLASH_MODEL,
|
||||
);
|
||||
expect(model).toBe(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
});
|
||||
|
||||
it('should return the Lite model when Lite is requested', () => {
|
||||
const model = getEffectiveModel(
|
||||
isInFallbackMode,
|
||||
DEFAULT_GEMINI_FLASH_LITE_MODEL,
|
||||
);
|
||||
expect(model).toBe(DEFAULT_GEMINI_FLASH_LITE_MODEL);
|
||||
});
|
||||
|
||||
it('should return a custom model name when requested', () => {
|
||||
const customModel = 'custom-model-v1';
|
||||
const model = getEffectiveModel(isInFallbackMode, customModel);
|
||||
expect(model).toBe(customModel);
|
||||
});
|
||||
});
|
||||
|
||||
describe('When IN fallback mode', () => {
|
||||
const isInFallbackMode = true;
|
||||
|
||||
it('should downgrade the Pro model to the Flash model', () => {
|
||||
const model = getEffectiveModel(isInFallbackMode, DEFAULT_GEMINI_MODEL);
|
||||
expect(model).toBe(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
});
|
||||
|
||||
it('should return the Flash model when Flash is requested', () => {
|
||||
const model = getEffectiveModel(
|
||||
isInFallbackMode,
|
||||
DEFAULT_GEMINI_FLASH_MODEL,
|
||||
);
|
||||
expect(model).toBe(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
});
|
||||
|
||||
it('should HONOR the Lite model when Lite is requested', () => {
|
||||
const model = getEffectiveModel(
|
||||
isInFallbackMode,
|
||||
DEFAULT_GEMINI_FLASH_LITE_MODEL,
|
||||
);
|
||||
expect(model).toBe(DEFAULT_GEMINI_FLASH_LITE_MODEL);
|
||||
});
|
||||
|
||||
it('should HONOR any model with "lite" in its name', () => {
|
||||
const customLiteModel = 'gemini-2.5-custom-lite-vNext';
|
||||
const model = getEffectiveModel(isInFallbackMode, customLiteModel);
|
||||
expect(model).toBe(customLiteModel);
|
||||
});
|
||||
|
||||
it('should downgrade any other custom model to the Flash model', () => {
|
||||
const customModel = 'custom-model-v1-unlisted';
|
||||
const model = getEffectiveModel(isInFallbackMode, customModel);
|
||||
expect(model).toBe(DEFAULT_GEMINI_FLASH_MODEL);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -7,46 +7,3 @@
|
||||
export const DEFAULT_QWEN_MODEL = 'coder-model';
|
||||
export const DEFAULT_QWEN_FLASH_MODEL = 'coder-model';
|
||||
export const DEFAULT_QWEN_EMBEDDING_MODEL = 'text-embedding-v4';
|
||||
|
||||
export const DEFAULT_GEMINI_MODEL = 'coder-model';
|
||||
export const DEFAULT_GEMINI_FLASH_MODEL = 'gemini-2.5-flash';
|
||||
export const DEFAULT_GEMINI_FLASH_LITE_MODEL = 'gemini-2.5-flash-lite';
|
||||
|
||||
export const DEFAULT_GEMINI_MODEL_AUTO = 'auto';
|
||||
|
||||
export const DEFAULT_GEMINI_EMBEDDING_MODEL = 'gemini-embedding-001';
|
||||
|
||||
// Some thinking models do not default to dynamic thinking, which is enabled by a value of -1.
|
||||
export const DEFAULT_THINKING_MODE = -1;
|
||||
|
||||
/**
|
||||
* Determines the effective model to use, applying fallback logic if necessary.
|
||||
*
|
||||
* When fallback mode is active, this function enforces the use of the standard
|
||||
* fallback model. However, it makes an exception for "lite" models (any model
|
||||
* with "lite" in its name), allowing them to be used to preserve cost savings.
|
||||
* This ensures that "pro" models are always downgraded, while "lite" model
|
||||
* requests are honored.
|
||||
*
|
||||
* @param isInFallbackMode Whether the application is in fallback mode.
|
||||
* @param requestedModel The model that was originally requested.
|
||||
* @returns The effective model name.
|
||||
*/
|
||||
export function getEffectiveModel(
|
||||
isInFallbackMode: boolean,
|
||||
requestedModel: string,
|
||||
): string {
|
||||
// If we are not in fallback mode, simply use the requested model.
|
||||
if (!isInFallbackMode) {
|
||||
return requestedModel;
|
||||
}
|
||||
|
||||
// If a "lite" model is requested, honor it. This allows for variations of
|
||||
// lite models without needing to list them all as constants.
|
||||
if (requestedModel.includes('lite')) {
|
||||
return requestedModel;
|
||||
}
|
||||
|
||||
// Default fallback for Gemini CLI.
|
||||
return DEFAULT_GEMINI_FLASH_MODEL;
|
||||
}
|
||||
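Since this hunk removes `getEffectiveModel` (and the client stops consulting fallback mode), here is a short usage example documenting the behavior being retired; it follows directly from the implementation and constants shown above, against the pre-change module.

```ts
import {
  getEffectiveModel,
  DEFAULT_GEMINI_MODEL,
  DEFAULT_GEMINI_FLASH_MODEL,
  DEFAULT_GEMINI_FLASH_LITE_MODEL,
} from './models.js';

// Not in fallback mode: the requested model is used as-is.
getEffectiveModel(false, DEFAULT_GEMINI_MODEL); // => DEFAULT_GEMINI_MODEL

// In fallback mode: non-"lite" models are downgraded to Flash...
getEffectiveModel(true, DEFAULT_GEMINI_MODEL); // => DEFAULT_GEMINI_FLASH_MODEL
getEffectiveModel(true, 'custom-model-v1');    // => DEFAULT_GEMINI_FLASH_MODEL

// ...while any model with "lite" in its name is honored.
getEffectiveModel(true, DEFAULT_GEMINI_FLASH_LITE_MODEL); // => unchanged
```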
|
||||
@@ -32,7 +32,7 @@ import {
|
||||
type ChatCompressionInfo,
|
||||
} from './turn.js';
|
||||
import { getCoreSystemPrompt } from './prompts.js';
|
||||
import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js';
|
||||
import { DEFAULT_QWEN_FLASH_MODEL } from '../config/models.js';
|
||||
import { FileDiscoveryService } from '../services/fileDiscoveryService.js';
|
||||
import { setSimulate429 } from '../utils/testUtils.js';
|
||||
import { tokenLimit } from './tokenLimits.js';
|
||||
@@ -302,8 +302,6 @@ describe('Gemini Client (client.ts)', () => {
|
||||
getFileService: vi.fn().mockReturnValue(fileService),
|
||||
getMaxSessionTurns: vi.fn().mockReturnValue(0),
|
||||
getSessionTokenLimit: vi.fn().mockReturnValue(32000),
|
||||
getQuotaErrorOccurred: vi.fn().mockReturnValue(false),
|
||||
setQuotaErrorOccurred: vi.fn(),
|
||||
getNoBrowser: vi.fn().mockReturnValue(false),
|
||||
getUsageStatisticsEnabled: vi.fn().mockReturnValue(true),
|
||||
getApprovalMode: vi.fn().mockReturnValue(ApprovalMode.DEFAULT),
|
||||
@@ -317,8 +315,6 @@ describe('Gemini Client (client.ts)', () => {
|
||||
getModelRouterService: vi.fn().mockReturnValue({
|
||||
route: vi.fn().mockResolvedValue({ model: 'default-routed-model' }),
|
||||
}),
|
||||
isInFallbackMode: vi.fn().mockReturnValue(false),
|
||||
setFallbackMode: vi.fn(),
|
||||
getCliVersion: vi.fn().mockReturnValue('1.0.0'),
|
||||
getChatCompression: vi.fn().mockReturnValue(undefined),
|
||||
getSkipNextSpeakerCheck: vi.fn().mockReturnValue(false),
|
||||
@@ -1062,26 +1058,18 @@ describe('Gemini Client (client.ts)', () => {
|
||||
|
||||
// Assert
|
||||
expect(ideContextStore.get).toHaveBeenCalled();
|
||||
const expectedContext = `
|
||||
Here is the user's editor context as a JSON object. This is for your information only.
|
||||
\`\`\`json
|
||||
${JSON.stringify(
|
||||
{
|
||||
activeFile: {
|
||||
path: '/path/to/active/file.ts',
|
||||
cursor: {
|
||||
line: 5,
|
||||
character: 10,
|
||||
},
|
||||
selectedText: 'hello',
|
||||
},
|
||||
otherOpenFiles: ['/path/to/recent/file1.ts', '/path/to/recent/file2.ts'],
|
||||
},
|
||||
null,
|
||||
2,
|
||||
)}
|
||||
const expectedContext = `Here is the user's editor context. This is for your information only.
|
||||
Active file:
|
||||
Path: /path/to/active/file.ts
|
||||
Cursor: line 5, character 10
|
||||
Selected text:
|
||||
\`\`\`
|
||||
`.trim();
|
||||
hello
|
||||
\`\`\`
|
||||
|
||||
Other open files:
|
||||
- /path/to/recent/file1.ts
|
||||
- /path/to/recent/file2.ts`;
|
||||
const expectedRequest = [{ text: expectedContext }];
|
||||
expect(mockChat.addHistory).toHaveBeenCalledWith({
|
||||
role: 'user',
|
||||
@@ -1181,25 +1169,14 @@ ${JSON.stringify(
|
||||
|
||||
// Assert
|
||||
expect(ideContextStore.get).toHaveBeenCalled();
|
||||
const expectedContext = `
|
||||
Here is the user's editor context as a JSON object. This is for your information only.
|
||||
\`\`\`json
|
||||
${JSON.stringify(
|
||||
{
|
||||
activeFile: {
|
||||
path: '/path/to/active/file.ts',
|
||||
cursor: {
|
||||
line: 5,
|
||||
character: 10,
|
||||
},
|
||||
selectedText: 'hello',
|
||||
},
|
||||
},
|
||||
null,
|
||||
2,
|
||||
)}
|
||||
const expectedContext = `Here is the user's editor context. This is for your information only.
|
||||
Active file:
|
||||
Path: /path/to/active/file.ts
|
||||
Cursor: line 5, character 10
|
||||
Selected text:
|
||||
\`\`\`
|
||||
`.trim();
|
||||
hello
|
||||
\`\`\``;
|
||||
const expectedRequest = [{ text: expectedContext }];
|
||||
expect(mockChat.addHistory).toHaveBeenCalledWith({
|
||||
role: 'user',
|
||||
@@ -1258,18 +1235,10 @@ ${JSON.stringify(
|
||||
|
||||
// Assert
|
||||
expect(ideContextStore.get).toHaveBeenCalled();
|
||||
const expectedContext = `
|
||||
Here is the user's editor context as a JSON object. This is for your information only.
|
||||
\`\`\`json
|
||||
${JSON.stringify(
|
||||
{
|
||||
otherOpenFiles: ['/path/to/recent/file1.ts', '/path/to/recent/file2.ts'],
|
||||
},
|
||||
null,
|
||||
2,
|
||||
)}
|
||||
\`\`\`
|
||||
`.trim();
|
||||
const expectedContext = `Here is the user's editor context. This is for your information only.
|
||||
Other open files:
|
||||
- /path/to/recent/file1.ts
|
||||
- /path/to/recent/file2.ts`;
|
||||
const expectedRequest = [{ text: expectedContext }];
|
||||
expect(mockChat.addHistory).toHaveBeenCalledWith({
|
||||
role: 'user',
|
||||
@@ -1786,11 +1755,9 @@ ${JSON.stringify(
|
||||
// Also verify it's the full context, not a delta.
|
||||
const call = mockChat.addHistory.mock.calls[0][0];
|
||||
const contextText = call.parts[0].text;
|
||||
const contextJson = JSON.parse(
|
||||
contextText.match(/```json\n(.*)\n```/s)![1],
|
||||
);
|
||||
expect(contextJson).toHaveProperty('activeFile');
|
||||
expect(contextJson.activeFile.path).toBe('/path/to/active/file.ts');
|
||||
// Verify it contains the active file information in plain text format
|
||||
expect(contextText).toContain('Active file:');
|
||||
expect(contextText).toContain('Path: /path/to/active/file.ts');
|
||||
});
|
||||
});
|
||||
|
||||
@@ -1993,7 +1960,7 @@ ${JSON.stringify(
|
||||
);
|
||||
expect(contextCall).toBeDefined();
|
||||
expect(JSON.stringify(contextCall![0])).toContain(
|
||||
"Here is the user's editor context as a JSON object",
|
||||
"Here is the user's editor context.",
|
||||
);
|
||||
// Check that the sent context is the new one (fileB.ts)
|
||||
expect(JSON.stringify(contextCall![0])).toContain('fileB.ts');
|
||||
@@ -2029,9 +1996,7 @@ ${JSON.stringify(
|
||||
|
||||
// Assert: Full context for fileA.ts was sent and stored.
|
||||
const initialCall = vi.mocked(mockChat.addHistory!).mock.calls[0][0];
|
||||
expect(JSON.stringify(initialCall)).toContain(
|
||||
"user's editor context as a JSON object",
|
||||
);
|
||||
expect(JSON.stringify(initialCall)).toContain("user's editor context.");
|
||||
expect(JSON.stringify(initialCall)).toContain('fileA.ts');
|
||||
// This implicitly tests that `lastSentIdeContext` is now set internally by the client.
|
||||
vi.mocked(mockChat.addHistory!).mockClear();
|
||||
@@ -2129,9 +2094,9 @@ ${JSON.stringify(
|
||||
const finalCall = vi.mocked(mockChat.addHistory!).mock.calls[0][0];
|
||||
expect(JSON.stringify(finalCall)).toContain('summary of changes');
|
||||
// The delta should reflect fileA being closed and fileC being opened.
|
||||
expect(JSON.stringify(finalCall)).toContain('filesClosed');
|
||||
expect(JSON.stringify(finalCall)).toContain('Files closed');
|
||||
expect(JSON.stringify(finalCall)).toContain('fileA.ts');
|
||||
expect(JSON.stringify(finalCall)).toContain('activeFileChanged');
|
||||
expect(JSON.stringify(finalCall)).toContain('Active file changed');
|
||||
expect(JSON.stringify(finalCall)).toContain('fileC.ts');
|
||||
});
|
||||
});
|
||||
@@ -2262,12 +2227,12 @@ ${JSON.stringify(
|
||||
contents,
|
||||
generationConfig,
|
||||
abortSignal,
|
||||
DEFAULT_GEMINI_FLASH_MODEL,
|
||||
DEFAULT_QWEN_FLASH_MODEL,
|
||||
);
|
||||
|
||||
expect(mockContentGenerator.generateContent).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
model: DEFAULT_GEMINI_FLASH_MODEL,
|
||||
model: DEFAULT_QWEN_FLASH_MODEL,
|
||||
config: expect.objectContaining({
|
||||
abortSignal,
|
||||
systemInstruction: getCoreSystemPrompt(''),
|
||||
@@ -2290,7 +2255,7 @@ ${JSON.stringify(
|
||||
contents,
|
||||
{},
|
||||
new AbortController().signal,
|
||||
DEFAULT_GEMINI_FLASH_MODEL,
|
||||
DEFAULT_QWEN_FLASH_MODEL,
|
||||
);
|
||||
|
||||
expect(mockContentGenerator.generateContent).not.toHaveBeenCalledWith({
|
||||
@@ -2300,7 +2265,7 @@ ${JSON.stringify(
|
||||
});
|
||||
expect(mockContentGenerator.generateContent).toHaveBeenCalledWith(
|
||||
{
|
||||
model: DEFAULT_GEMINI_FLASH_MODEL,
|
||||
model: DEFAULT_QWEN_FLASH_MODEL,
|
||||
config: expect.any(Object),
|
||||
contents,
|
||||
},
|
||||
@@ -2308,28 +2273,7 @@ ${JSON.stringify(
|
||||
);
|
||||
});
|
||||
|
||||
it('should use the Flash model when fallback mode is active', async () => {
|
||||
const contents = [{ role: 'user', parts: [{ text: 'hello' }] }];
|
||||
const generationConfig = { temperature: 0.5 };
|
||||
const abortSignal = new AbortController().signal;
|
||||
const requestedModel = 'gemini-2.5-pro'; // A non-flash model
|
||||
|
||||
// Mock config to be in fallback mode
|
||||
vi.spyOn(client['config'], 'isInFallbackMode').mockReturnValue(true);
|
||||
|
||||
await client.generateContent(
|
||||
contents,
|
||||
generationConfig,
|
||||
abortSignal,
|
||||
requestedModel,
|
||||
);
|
||||
|
||||
expect(mockGenerateContentFn).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
model: DEFAULT_GEMINI_FLASH_MODEL,
|
||||
}),
|
||||
'test-session-id',
|
||||
);
|
||||
});
|
||||
// Note: there is currently no "fallback mode" model routing; the model used
|
||||
// is always the one explicitly requested by the caller.
|
||||
});
|
||||
});
|
||||
|
||||
@@ -15,7 +15,6 @@ import type {
|
||||
|
||||
// Config
|
||||
import { ApprovalMode, type Config } from '../config/config.js';
|
||||
import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js';
|
||||
|
||||
// Core modules
|
||||
import type { ContentGenerator } from './contentGenerator.js';
|
||||
@@ -219,42 +218,48 @@ export class GeminiClient {
|
||||
}
|
||||
|
||||
if (forceFullContext || !this.lastSentIdeContext) {
|
||||
// Send full context as JSON
|
||||
// Send full context as plain text
|
||||
const openFiles = currentIdeContext.workspaceState?.openFiles || [];
|
||||
const activeFile = openFiles.find((f) => f.isActive);
|
||||
const otherOpenFiles = openFiles
|
||||
.filter((f) => !f.isActive)
|
||||
.map((f) => f.path);
|
||||
|
||||
const contextData: Record<string, unknown> = {};
|
||||
const contextLines: string[] = [];
|
||||
|
||||
if (activeFile) {
|
||||
contextData['activeFile'] = {
|
||||
path: activeFile.path,
|
||||
cursor: activeFile.cursor
|
||||
? {
|
||||
line: activeFile.cursor.line,
|
||||
character: activeFile.cursor.character,
|
||||
}
|
||||
: undefined,
|
||||
selectedText: activeFile.selectedText || undefined,
|
||||
};
|
||||
contextLines.push('Active file:');
|
||||
contextLines.push(` Path: ${activeFile.path}`);
|
||||
if (activeFile.cursor) {
|
||||
contextLines.push(
|
||||
` Cursor: line ${activeFile.cursor.line}, character ${activeFile.cursor.character}`,
|
||||
);
|
||||
}
|
||||
if (activeFile.selectedText) {
|
||||
contextLines.push(' Selected text:');
|
||||
contextLines.push('```');
|
||||
contextLines.push(activeFile.selectedText);
|
||||
contextLines.push('```');
|
||||
}
|
||||
}
|
||||
|
||||
if (otherOpenFiles.length > 0) {
|
||||
contextData['otherOpenFiles'] = otherOpenFiles;
|
||||
if (contextLines.length > 0) {
|
||||
contextLines.push('');
|
||||
}
|
||||
contextLines.push('Other open files:');
|
||||
for (const filePath of otherOpenFiles) {
|
||||
contextLines.push(` - ${filePath}`);
|
||||
}
|
||||
}
|
||||
|
||||
if (Object.keys(contextData).length === 0) {
|
||||
if (contextLines.length === 0) {
|
||||
return { contextParts: [], newIdeContext: currentIdeContext };
|
||||
}
|
||||
|
||||
const jsonString = JSON.stringify(contextData, null, 2);
|
||||
const contextParts = [
|
||||
"Here is the user's editor context as a JSON object. This is for your information only.",
|
||||
'```json',
|
||||
jsonString,
|
||||
'```',
|
||||
"Here is the user's editor context. This is for your information only.",
|
||||
contextLines.join('\n'),
|
||||
];
|
||||
|
||||
if (this.config.getDebugMode()) {
|
||||
@@ -265,9 +270,8 @@ export class GeminiClient {
|
||||
newIdeContext: currentIdeContext,
|
||||
};
|
||||
} else {
|
||||
// Calculate and send delta as JSON
|
||||
const delta: Record<string, unknown> = {};
|
||||
const changes: Record<string, unknown> = {};
|
||||
// Calculate and send delta as plain text
|
||||
const changeLines: string[] = [];
|
||||
|
||||
const lastFiles = new Map(
|
||||
(this.lastSentIdeContext.workspaceState?.openFiles || []).map(
|
||||
@@ -288,7 +292,10 @@ export class GeminiClient {
|
||||
}
|
||||
}
|
||||
if (openedFiles.length > 0) {
|
||||
changes['filesOpened'] = openedFiles;
|
||||
changeLines.push('Files opened:');
|
||||
for (const filePath of openedFiles) {
|
||||
changeLines.push(` - ${filePath}`);
|
||||
}
|
||||
}
|
||||
|
||||
const closedFiles: string[] = [];
|
||||
@@ -298,7 +305,13 @@ export class GeminiClient {
|
||||
}
|
||||
}
|
||||
if (closedFiles.length > 0) {
|
||||
changes['filesClosed'] = closedFiles;
|
||||
if (changeLines.length > 0) {
|
||||
changeLines.push('');
|
||||
}
|
||||
changeLines.push('Files closed:');
|
||||
for (const filePath of closedFiles) {
|
||||
changeLines.push(` - ${filePath}`);
|
||||
}
|
||||
}
|
||||
|
||||
const lastActiveFile = (
|
||||
@@ -310,16 +323,22 @@ export class GeminiClient {
|
||||
|
||||
if (currentActiveFile) {
|
||||
if (!lastActiveFile || lastActiveFile.path !== currentActiveFile.path) {
|
||||
changes['activeFileChanged'] = {
|
||||
path: currentActiveFile.path,
|
||||
cursor: currentActiveFile.cursor
|
||||
? {
|
||||
line: currentActiveFile.cursor.line,
|
||||
character: currentActiveFile.cursor.character,
|
||||
}
|
||||
: undefined,
|
||||
selectedText: currentActiveFile.selectedText || undefined,
|
||||
};
|
||||
if (changeLines.length > 0) {
|
||||
changeLines.push('');
|
||||
}
|
||||
changeLines.push('Active file changed:');
|
||||
changeLines.push(` Path: ${currentActiveFile.path}`);
|
||||
if (currentActiveFile.cursor) {
|
||||
changeLines.push(
|
||||
` Cursor: line ${currentActiveFile.cursor.line}, character ${currentActiveFile.cursor.character}`,
|
||||
);
|
||||
}
|
||||
if (currentActiveFile.selectedText) {
|
||||
changeLines.push(' Selected text:');
|
||||
changeLines.push('```');
|
||||
changeLines.push(currentActiveFile.selectedText);
|
||||
changeLines.push('```');
|
||||
}
|
||||
} else {
|
||||
const lastCursor = lastActiveFile.cursor;
|
||||
const currentCursor = currentActiveFile.cursor;
|
||||
@@ -329,42 +348,50 @@ export class GeminiClient {
|
||||
lastCursor.line !== currentCursor.line ||
|
||||
lastCursor.character !== currentCursor.character)
|
||||
) {
|
||||
changes['cursorMoved'] = {
|
||||
path: currentActiveFile.path,
|
||||
cursor: {
|
||||
line: currentCursor.line,
|
||||
character: currentCursor.character,
|
||||
},
|
||||
};
|
||||
if (changeLines.length > 0) {
|
||||
changeLines.push('');
|
||||
}
|
||||
changeLines.push('Cursor moved:');
|
||||
changeLines.push(` Path: ${currentActiveFile.path}`);
|
||||
changeLines.push(
|
||||
` New position: line ${currentCursor.line}, character ${currentCursor.character}`,
|
||||
);
|
||||
}
|
||||
|
||||
const lastSelectedText = lastActiveFile.selectedText || '';
|
||||
const currentSelectedText = currentActiveFile.selectedText || '';
|
||||
if (lastSelectedText !== currentSelectedText) {
|
||||
changes['selectionChanged'] = {
|
||||
path: currentActiveFile.path,
|
||||
selectedText: currentSelectedText,
|
||||
};
|
||||
if (changeLines.length > 0) {
|
||||
changeLines.push('');
|
||||
}
|
||||
changeLines.push('Selection changed:');
|
||||
changeLines.push(` Path: ${currentActiveFile.path}`);
|
||||
if (currentSelectedText) {
|
||||
changeLines.push(' Selected text:');
|
||||
changeLines.push('```');
|
||||
changeLines.push(currentSelectedText);
|
||||
changeLines.push('```');
|
||||
} else {
|
||||
changeLines.push(' Selected text: (none)');
|
||||
}
|
||||
}
|
||||
}
|
||||
} else if (lastActiveFile) {
|
||||
changes['activeFileChanged'] = {
|
||||
path: null,
|
||||
previousPath: lastActiveFile.path,
|
||||
};
|
||||
if (changeLines.length > 0) {
|
||||
changeLines.push('');
|
||||
}
|
||||
changeLines.push('Active file changed:');
|
||||
changeLines.push(' No active file');
|
||||
changeLines.push(` Previous path: ${lastActiveFile.path}`);
|
||||
}
|
||||
|
||||
if (Object.keys(changes).length === 0) {
|
||||
if (changeLines.length === 0) {
|
||||
return { contextParts: [], newIdeContext: currentIdeContext };
|
||||
}
|
||||
|
||||
delta['changes'] = changes;
|
||||
const jsonString = JSON.stringify(delta, null, 2);
|
||||
const contextParts = [
|
||||
"Here is a summary of changes in the user's editor context, in JSON format. This is for your information only.",
|
||||
'```json',
|
||||
jsonString,
|
||||
'```',
|
||||
"Here is a summary of changes in the user's editor context. This is for your information only.",
|
||||
changeLines.join('\n'),
|
||||
];
|
||||
|
||||
if (this.config.getDebugMode()) {
|
||||
@@ -542,11 +569,6 @@ export class GeminiClient {
|
||||
}
|
||||
}
|
||||
if (!turn.pendingToolCalls.length && signal && !signal.aborted) {
|
||||
// Check if next speaker check is needed
|
||||
if (this.config.getQuotaErrorOccurred()) {
|
||||
return turn;
|
||||
}
|
||||
|
||||
if (this.config.getSkipNextSpeakerCheck()) {
|
||||
return turn;
|
||||
}
|
||||
@@ -602,14 +624,11 @@ export class GeminiClient {
|
||||
};
|
||||
|
||||
const apiCall = () => {
|
||||
const modelToUse = this.config.isInFallbackMode()
|
||||
? DEFAULT_GEMINI_FLASH_MODEL
|
||||
: model;
|
||||
currentAttemptModel = modelToUse;
|
||||
currentAttemptModel = model;
|
||||
|
||||
return this.getContentGeneratorOrFail().generateContent(
|
||||
{
|
||||
model: modelToUse,
|
||||
model,
|
||||
config: requestConfig,
|
||||
contents,
|
||||
},
|
||||
|
||||
@@ -5,7 +5,11 @@
|
||||
*/
|
||||
|
||||
import { describe, it, expect, vi } from 'vitest';
|
||||
import { createContentGenerator, AuthType } from './contentGenerator.js';
|
||||
import {
|
||||
createContentGenerator,
|
||||
createContentGeneratorConfig,
|
||||
AuthType,
|
||||
} from './contentGenerator.js';
|
||||
import { GoogleGenAI } from '@google/genai';
|
||||
import type { Config } from '../config/config.js';
|
||||
import { LoggingContentGenerator } from './loggingContentGenerator/index.js';
|
||||
@@ -78,3 +82,32 @@ describe('createContentGenerator', () => {
|
||||
expect(generator).toBeInstanceOf(LoggingContentGenerator);
|
||||
});
|
||||
});
|
||||
|
||||
describe('createContentGeneratorConfig', () => {
|
||||
const mockConfig = {
|
||||
getProxy: () => undefined,
|
||||
} as unknown as Config;
|
||||
|
||||
it('should preserve provided fields and set authType for QWEN_OAUTH', () => {
|
||||
const cfg = createContentGeneratorConfig(mockConfig, AuthType.QWEN_OAUTH, {
|
||||
model: 'vision-model',
|
||||
apiKey: 'QWEN_OAUTH_DYNAMIC_TOKEN',
|
||||
});
|
||||
expect(cfg.authType).toBe(AuthType.QWEN_OAUTH);
|
||||
expect(cfg.model).toBe('vision-model');
|
||||
expect(cfg.apiKey).toBe('QWEN_OAUTH_DYNAMIC_TOKEN');
|
||||
});
|
||||
|
||||
it('should not warn or fallback for QWEN_OAUTH (resolution handled by ModelConfigResolver)', () => {
|
||||
const warnSpy = vi
|
||||
.spyOn(console, 'warn')
|
||||
.mockImplementation(() => undefined);
|
||||
const cfg = createContentGeneratorConfig(mockConfig, AuthType.QWEN_OAUTH, {
|
||||
model: 'some-random-model',
|
||||
});
|
||||
expect(cfg.model).toBe('some-random-model');
|
||||
expect(cfg.apiKey).toBeUndefined();
|
||||
expect(warnSpy).not.toHaveBeenCalled();
|
||||
warnSpy.mockRestore();
|
||||
});
|
||||
});
|
||||
|
||||
@@ -12,9 +12,24 @@ import type {
|
||||
GenerateContentParameters,
|
||||
GenerateContentResponse,
|
||||
} from '@google/genai';
|
||||
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
|
||||
import type { Config } from '../config/config.js';
|
||||
import { LoggingContentGenerator } from './loggingContentGenerator/index.js';
|
||||
import type {
|
||||
ConfigSource,
|
||||
ConfigSourceKind,
|
||||
ConfigSources,
|
||||
} from '../utils/configResolver.js';
|
||||
import {
|
||||
getDefaultApiKeyEnvVar,
|
||||
getDefaultModelEnvVar,
|
||||
MissingAnthropicBaseUrlEnvError,
|
||||
MissingApiKeyError,
|
||||
MissingBaseUrlError,
|
||||
MissingModelError,
|
||||
StrictMissingCredentialsError,
|
||||
StrictMissingModelIdError,
|
||||
} from '../models/modelConfigErrors.js';
|
||||
import { PROVIDER_SOURCED_FIELDS } from '../models/modelsConfig.js';
|
||||
|
||||
/**
|
||||
* Interface abstracting the core functionalities for generating content and counting tokens.
|
||||
@@ -48,6 +63,7 @@ export enum AuthType {
|
||||
export type ContentGeneratorConfig = {
|
||||
model: string;
|
||||
apiKey?: string;
|
||||
apiKeyEnvKey?: string;
|
||||
baseUrl?: string;
|
||||
vertexai?: boolean;
|
||||
authType?: AuthType | undefined;
|
||||
@@ -77,102 +93,178 @@ export type ContentGeneratorConfig = {
|
||||
schemaCompliance?: 'auto' | 'openapi_30';
|
||||
};
|
||||
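For reference, a minimal literal showing the new `apiKeyEnvKey` field alongside the existing credential fields; all values are placeholders and the env variable name is an assumption.

```ts
// Illustrative only: a resolved config whose key came from an env variable.
const generatorConfig: ContentGeneratorConfig = {
  model: 'my-model',                          // placeholder id
  authType: AuthType.USE_OPENAI,
  baseUrl: 'https://example.invalid/v1',      // placeholder endpoint
  apiKeyEnvKey: 'MY_PROVIDER_API_KEY',        // records where the key was read from
  apiKey: process.env['MY_PROVIDER_API_KEY'],
};
```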
|
||||
export function createContentGeneratorConfig(
|
||||
// Keep the public ContentGeneratorConfigSources API, but reuse the generic
|
||||
// source-tracking types from utils/configResolver to avoid duplication.
|
||||
export type ContentGeneratorConfigSourceKind = ConfigSourceKind;
|
||||
export type ContentGeneratorConfigSource = ConfigSource;
|
||||
export type ContentGeneratorConfigSources = ConfigSources;
|
||||
|
||||
export type ResolvedContentGeneratorConfig = {
|
||||
config: ContentGeneratorConfig;
|
||||
sources: ContentGeneratorConfigSources;
|
||||
};
|
||||
|
||||
function setSource(
|
||||
sources: ContentGeneratorConfigSources,
|
||||
path: string,
|
||||
source: ContentGeneratorConfigSource,
|
||||
): void {
|
||||
sources[path] = source;
|
||||
}
|
||||
|
||||
function getSeedSource(
|
||||
seed: ContentGeneratorConfigSources | undefined,
|
||||
path: string,
|
||||
): ContentGeneratorConfigSource | undefined {
|
||||
return seed?.[path];
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve ContentGeneratorConfig while tracking the source of each effective field.
|
||||
*
|
||||
* This function now primarily validates and finalizes the configuration that has
|
||||
* already been resolved by ModelConfigResolver. The env fallback logic has been
|
||||
* moved to the unified resolver to eliminate duplication.
|
||||
*
|
||||
* Note: The generationConfig passed here should already be fully resolved with
|
||||
* proper source tracking from the caller (CLI/SDK layer).
|
||||
*/
|
||||
export function resolveContentGeneratorConfigWithSources(
|
||||
config: Config,
|
||||
authType: AuthType | undefined,
|
||||
generationConfig?: Partial<ContentGeneratorConfig>,
|
||||
): ContentGeneratorConfig {
|
||||
let newContentGeneratorConfig: Partial<ContentGeneratorConfig> = {
|
||||
seedSources?: ContentGeneratorConfigSources,
|
||||
options?: { strictModelProvider?: boolean },
|
||||
): ResolvedContentGeneratorConfig {
|
||||
const sources: ContentGeneratorConfigSources = { ...(seedSources || {}) };
|
||||
const strictModelProvider = options?.strictModelProvider === true;
|
||||
|
||||
// Build config with computed fields
|
||||
const newContentGeneratorConfig: Partial<ContentGeneratorConfig> = {
|
||||
...(generationConfig || {}),
|
||||
authType,
|
||||
proxy: config?.getProxy(),
|
||||
};
|
||||
|
||||
if (authType === AuthType.QWEN_OAUTH) {
|
||||
// For Qwen OAuth, we'll handle the API key dynamically in createContentGenerator
|
||||
// Set a special marker to indicate this is Qwen OAuth
|
||||
return {
|
||||
...newContentGeneratorConfig,
|
||||
model: DEFAULT_QWEN_MODEL,
|
||||
apiKey: 'QWEN_OAUTH_DYNAMIC_TOKEN',
|
||||
} as ContentGeneratorConfig;
|
||||
// Set sources for computed fields
|
||||
setSource(sources, 'authType', {
|
||||
kind: 'computed',
|
||||
detail: 'provided by caller',
|
||||
});
|
||||
if (config?.getProxy()) {
|
||||
setSource(sources, 'proxy', {
|
||||
kind: 'computed',
|
||||
detail: 'Config.getProxy()',
|
||||
});
|
||||
}
|
||||
|
||||
if (authType === AuthType.USE_OPENAI) {
|
||||
newContentGeneratorConfig = {
|
||||
...newContentGeneratorConfig,
|
||||
apiKey: newContentGeneratorConfig.apiKey || process.env['OPENAI_API_KEY'],
|
||||
baseUrl:
|
||||
newContentGeneratorConfig.baseUrl || process.env['OPENAI_BASE_URL'],
|
||||
model: newContentGeneratorConfig.model || process.env['OPENAI_MODEL'],
|
||||
};
|
||||
// Preserve seed sources for fields that were passed in
|
||||
const seedOrUnknown = (path: string): ContentGeneratorConfigSource =>
|
||||
getSeedSource(seedSources, path) ?? { kind: 'unknown' };
|
||||
|
||||
if (!newContentGeneratorConfig.apiKey) {
|
||||
throw new Error('OPENAI_API_KEY environment variable not found.');
|
||||
}
|
||||
|
||||
return {
|
||||
...newContentGeneratorConfig,
|
||||
model: newContentGeneratorConfig?.model || 'qwen3-coder-plus',
|
||||
} as ContentGeneratorConfig;
|
||||
}
|
||||
|
||||
if (authType === AuthType.USE_ANTHROPIC) {
|
||||
newContentGeneratorConfig = {
|
||||
...newContentGeneratorConfig,
|
||||
apiKey:
|
||||
newContentGeneratorConfig.apiKey || process.env['ANTHROPIC_API_KEY'],
|
||||
baseUrl:
|
||||
newContentGeneratorConfig.baseUrl || process.env['ANTHROPIC_BASE_URL'],
|
||||
model: newContentGeneratorConfig.model || process.env['ANTHROPIC_MODEL'],
|
||||
};
|
||||
|
||||
if (!newContentGeneratorConfig.apiKey) {
|
||||
throw new Error('ANTHROPIC_API_KEY environment variable not found.');
|
||||
}
|
||||
|
||||
if (!newContentGeneratorConfig.baseUrl) {
|
||||
throw new Error('ANTHROPIC_BASE_URL environment variable not found.');
|
||||
}
|
||||
|
||||
if (!newContentGeneratorConfig.model) {
|
||||
throw new Error('ANTHROPIC_MODEL environment variable not found.');
|
||||
for (const field of PROVIDER_SOURCED_FIELDS) {
|
||||
if (generationConfig && field in generationConfig && !sources[field]) {
|
||||
setSource(sources, field, seedOrUnknown(field));
|
||||
}
|
||||
}
|
||||
|
||||
if (authType === AuthType.USE_GEMINI) {
|
||||
newContentGeneratorConfig = {
|
||||
...newContentGeneratorConfig,
|
||||
apiKey: newContentGeneratorConfig.apiKey || process.env['GEMINI_API_KEY'],
|
||||
model: newContentGeneratorConfig.model || process.env['GEMINI_MODEL'],
|
||||
};
|
||||
// Validate required fields based on authType. This does not perform any
|
||||
// fallback resolution (resolution is handled by ModelConfigResolver).
|
||||
const validation = validateModelConfig(
|
||||
newContentGeneratorConfig as ContentGeneratorConfig,
|
||||
strictModelProvider,
|
||||
);
|
||||
if (!validation.valid) {
|
||||
throw new Error(validation.errors.map((e) => e.message).join('\n'));
|
||||
}
|
||||
|
||||
if (!newContentGeneratorConfig.apiKey) {
|
||||
throw new Error('GEMINI_API_KEY environment variable not found.');
|
||||
}
|
||||
return {
|
||||
config: newContentGeneratorConfig as ContentGeneratorConfig,
|
||||
sources,
|
||||
};
|
||||
}
|
||||
|
||||
if (!newContentGeneratorConfig.model) {
|
||||
throw new Error('GEMINI_MODEL environment variable not found.');
|
||||
export interface ModelConfigValidationResult {
|
||||
valid: boolean;
|
||||
errors: Error[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate a resolved model configuration.
|
||||
* This is the single validation entry point used across Core.
|
||||
*/
|
||||
export function validateModelConfig(
|
||||
config: ContentGeneratorConfig,
|
||||
isStrictModelProvider: boolean = false,
|
||||
): ModelConfigValidationResult {
|
||||
const errors: Error[] = [];
|
||||
|
||||
// Qwen OAuth doesn't need validation - it uses dynamic tokens
|
||||
if (config.authType === AuthType.QWEN_OAUTH) {
|
||||
return { valid: true, errors: [] };
|
||||
}
|
||||
|
||||
// API key is required for all other auth types
|
||||
if (!config.apiKey) {
|
||||
if (isStrictModelProvider) {
|
||||
errors.push(
|
||||
new StrictMissingCredentialsError(
|
||||
config.authType,
|
||||
config.model,
|
||||
config.apiKeyEnvKey,
|
||||
),
|
||||
);
|
||||
} else {
|
||||
const envKey =
|
||||
config.apiKeyEnvKey || getDefaultApiKeyEnvVar(config.authType);
|
||||
errors.push(
|
||||
new MissingApiKeyError({
|
||||
authType: config.authType,
|
||||
model: config.model,
|
||||
baseUrl: config.baseUrl,
|
||||
envKey,
|
||||
}),
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
if (authType === AuthType.USE_VERTEX_AI) {
|
||||
newContentGeneratorConfig = {
|
||||
...newContentGeneratorConfig,
|
||||
apiKey: newContentGeneratorConfig.apiKey || process.env['GOOGLE_API_KEY'],
|
||||
model: newContentGeneratorConfig.model || process.env['GOOGLE_MODEL'],
|
||||
};
|
||||
|
||||
if (!newContentGeneratorConfig.apiKey) {
|
||||
throw new Error('GOOGLE_API_KEY environment variable not found.');
|
||||
}
|
||||
|
||||
if (!newContentGeneratorConfig.model) {
|
||||
throw new Error('GOOGLE_MODEL environment variable not found.');
|
||||
// Model is required
|
||||
if (!config.model) {
|
||||
if (isStrictModelProvider) {
|
||||
errors.push(new StrictMissingModelIdError(config.authType));
|
||||
} else {
|
||||
const envKey = getDefaultModelEnvVar(config.authType);
|
||||
errors.push(new MissingModelError({ authType: config.authType, envKey }));
|
||||
}
|
||||
}
|
||||
|
||||
return newContentGeneratorConfig as ContentGeneratorConfig;
|
||||
// An explicit baseUrl is required for Anthropic; migrated from existing code.
|
||||
if (config.authType === AuthType.USE_ANTHROPIC && !config.baseUrl) {
|
||||
if (isStrictModelProvider) {
|
||||
errors.push(
|
||||
new MissingBaseUrlError({
|
||||
authType: config.authType,
|
||||
model: config.model,
|
||||
}),
|
||||
);
|
||||
} else if (config.authType === AuthType.USE_ANTHROPIC) {
|
||||
errors.push(new MissingAnthropicBaseUrlEnvError());
|
||||
}
|
||||
}
|
||||
|
||||
return { valid: errors.length === 0, errors };
|
||||
}
|
||||
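A brief sketch of calling the shared validator directly; the config literal, env key name, and import path are placeholders/assumptions.

```ts
// Import path assumes a caller inside core/; adjust as needed.
import { AuthType, validateModelConfig } from './contentGenerator.js';

const result = validateModelConfig(
  {
    model: 'my-model',            // placeholder id
    authType: AuthType.USE_OPENAI,
    apiKeyEnvKey: 'MY_API_KEY',   // placeholder env key; apiKey deliberately absent
  },
  /* isStrictModelProvider */ true,
);

if (!result.valid) {
  // With strict mode and no apiKey this yields a StrictMissingCredentialsError.
  console.error(result.errors.map((e) => e.message).join('\n'));
}
```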
|
||||
export function createContentGeneratorConfig(
|
||||
config: Config,
|
||||
authType: AuthType | undefined,
|
||||
generationConfig?: Partial<ContentGeneratorConfig>,
|
||||
): ContentGeneratorConfig {
|
||||
return resolveContentGeneratorConfigWithSources(
|
||||
config,
|
||||
authType,
|
||||
generationConfig,
|
||||
).config;
|
||||
}
|
||||
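A hedged sketch of the source-tracking resolver as a caller might use it; the seed sources, credentials, and ids shown are assumptions about what a CLI layer could pass.

```ts
// `config` is an initialized Config instance (assumed).
const { config: resolved, sources } = resolveContentGeneratorConfigWithSources(
  config,
  AuthType.USE_OPENAI,
  {
    model: 'my-model',                     // placeholder id
    apiKey: 'sk-placeholder',
    baseUrl: 'https://example.invalid/v1',
  },
  { model: { kind: 'computed', detail: 'picked via /model' } }, // assumed seed sources
  { strictModelProvider: false },
);
console.log(resolved.model, sources['model']);

// createContentGeneratorConfig is the thin wrapper that discards the sources.
const plain = createContentGeneratorConfig(config, AuthType.USE_OPENAI, resolved);
console.log(plain.model);
```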
|
||||
export async function createContentGenerator(
|
||||
@@ -180,11 +272,12 @@ export async function createContentGenerator(
|
||||
gcConfig: Config,
|
||||
isInitialAuth?: boolean,
|
||||
): Promise<ContentGenerator> {
|
||||
if (config.authType === AuthType.USE_OPENAI) {
|
||||
if (!config.apiKey) {
|
||||
throw new Error('OPENAI_API_KEY environment variable not found.');
|
||||
}
|
||||
const validation = validateModelConfig(config, false);
|
||||
if (!validation.valid) {
|
||||
throw new Error(validation.errors.map((e) => e.message).join('\n'));
|
||||
}
|
||||
|
||||
if (config.authType === AuthType.USE_OPENAI) {
|
||||
// Import OpenAIContentGenerator dynamically to avoid circular dependencies
|
||||
const { createOpenAIContentGenerator } = await import(
|
||||
'./openaiContentGenerator/index.js'
|
||||
@@ -223,10 +316,6 @@ export async function createContentGenerator(
|
||||
}
|
||||
|
||||
if (config.authType === AuthType.USE_ANTHROPIC) {
|
||||
if (!config.apiKey) {
|
||||
throw new Error('ANTHROPIC_API_KEY environment variable not found.');
|
||||
}
|
||||
|
||||
const { createAnthropicContentGenerator } = await import(
|
||||
'./anthropicContentGenerator/index.js'
|
||||
);
|
||||
|
||||
@@ -240,7 +240,7 @@ describe('CoreToolScheduler', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -318,7 +318,7 @@ describe('CoreToolScheduler', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -497,7 +497,7 @@ describe('CoreToolScheduler', () => {
|
||||
getExcludeTools: () => ['write_file', 'edit', 'run_shell_command'],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -584,7 +584,7 @@ describe('CoreToolScheduler', () => {
|
||||
getExcludeTools: () => ['write_file', 'edit'], // Different excluded tools
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -674,7 +674,7 @@ describe('CoreToolScheduler with payload', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -1001,7 +1001,7 @@ describe('CoreToolScheduler edit cancellation', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -1108,7 +1108,7 @@ describe('CoreToolScheduler YOLO mode', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -1258,7 +1258,7 @@ describe('CoreToolScheduler cancellation during executing with live output', ()
|
||||
getApprovalMode: () => ApprovalMode.DEFAULT,
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getToolRegistry: () => mockToolRegistry,
|
||||
getShellExecutionConfig: () => ({
|
||||
@@ -1350,7 +1350,7 @@ describe('CoreToolScheduler request queueing', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -1482,7 +1482,7 @@ describe('CoreToolScheduler request queueing', () => {
|
||||
getToolRegistry: () => toolRegistry,
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 80,
|
||||
@@ -1586,7 +1586,7 @@ describe('CoreToolScheduler request queueing', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -1854,7 +1854,7 @@ describe('CoreToolScheduler Sequential Execution', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
@@ -1975,7 +1975,7 @@ describe('CoreToolScheduler Sequential Execution', () => {
|
||||
getAllowedTools: () => [],
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
|
||||
@@ -20,7 +20,6 @@ import {
|
||||
} from './geminiChat.js';
|
||||
import type { Config } from '../config/config.js';
|
||||
import { setSimulate429 } from '../utils/testUtils.js';
|
||||
import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js';
|
||||
import { AuthType } from './contentGenerator.js';
|
||||
import { type RetryOptions } from '../utils/retry.js';
|
||||
import { uiTelemetryService } from '../telemetry/uiTelemetry.js';
|
||||
@@ -112,15 +111,11 @@ describe('GeminiChat', () => {
|
||||
getUsageStatisticsEnabled: () => true,
|
||||
getDebugMode: () => false,
|
||||
getContentGeneratorConfig: vi.fn().mockReturnValue({
|
||||
authType: 'gemini-api-key', // Ensure this is set for fallback tests
|
||||
authType: 'gemini', // Ensure this is set for fallback tests
|
||||
model: 'test-model',
|
||||
}),
|
||||
getModel: vi.fn().mockReturnValue('gemini-pro'),
|
||||
setModel: vi.fn(),
|
||||
isInFallbackMode: vi.fn().mockReturnValue(false),
|
||||
getQuotaErrorOccurred: vi.fn().mockReturnValue(false),
|
||||
setQuotaErrorOccurred: vi.fn(),
|
||||
flashFallbackHandler: undefined,
|
||||
getProjectRoot: vi.fn().mockReturnValue('/test/project/root'),
|
||||
getCliVersion: vi.fn().mockReturnValue('1.0.0'),
|
||||
storage: {
|
||||
@@ -1349,9 +1344,8 @@ describe('GeminiChat', () => {
|
||||
],
|
||||
} as unknown as GenerateContentResponse;
|
||||
|
||||
it('should use the FLASH model when in fallback mode (sendMessageStream)', async () => {
|
||||
it('should pass the requested model through to generateContentStream', async () => {
|
||||
vi.mocked(mockConfig.getModel).mockReturnValue('gemini-pro');
|
||||
vi.mocked(mockConfig.isInFallbackMode).mockReturnValue(true);
|
||||
vi.mocked(mockContentGenerator.generateContentStream).mockImplementation(
|
||||
async () =>
|
||||
(async function* () {
|
||||
@@ -1370,7 +1364,7 @@ describe('GeminiChat', () => {
|
||||
|
||||
expect(mockContentGenerator.generateContentStream).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
model: DEFAULT_GEMINI_FLASH_MODEL,
|
||||
model: 'test-model',
|
||||
}),
|
||||
'prompt-id-res3',
|
||||
);
|
||||
@@ -1422,9 +1416,6 @@ describe('GeminiChat', () => {
|
||||
authType,
|
||||
});
|
||||
|
||||
const isInFallbackModeSpy = vi.spyOn(mockConfig, 'isInFallbackMode');
|
||||
isInFallbackModeSpy.mockReturnValue(false);
|
||||
|
||||
vi.mocked(mockContentGenerator.generateContentStream)
|
||||
.mockRejectedValueOnce(error429) // Attempt 1 fails
|
||||
.mockResolvedValueOnce(
|
||||
@@ -1441,10 +1432,7 @@ describe('GeminiChat', () => {
|
||||
})(),
|
||||
);
|
||||
|
||||
mockHandleFallback.mockImplementation(async () => {
|
||||
isInFallbackModeSpy.mockReturnValue(true);
|
||||
return true; // Signal retry
|
||||
});
|
||||
mockHandleFallback.mockImplementation(async () => true);
|
||||
|
||||
const stream = await chat.sendMessageStream(
|
||||
'test-model',
|
||||
|
||||
@@ -19,10 +19,6 @@ import type {
|
||||
import { ApiError, createUserContent } from '@google/genai';
|
||||
import { retryWithBackoff } from '../utils/retry.js';
|
||||
import type { Config } from '../config/config.js';
|
||||
import {
|
||||
DEFAULT_GEMINI_FLASH_MODEL,
|
||||
getEffectiveModel,
|
||||
} from '../config/models.js';
|
||||
import { hasCycleInSchema } from '../tools/tools.js';
|
||||
import type { StructuredError } from './turn.js';
|
||||
import {
|
||||
@@ -352,31 +348,15 @@ export class GeminiChat {
|
||||
params: SendMessageParameters,
|
||||
prompt_id: string,
|
||||
): Promise<AsyncGenerator<GenerateContentResponse>> {
|
||||
const apiCall = () => {
|
||||
const modelToUse = getEffectiveModel(
|
||||
this.config.isInFallbackMode(),
|
||||
model,
|
||||
);
|
||||
|
||||
if (
|
||||
this.config.getQuotaErrorOccurred() &&
|
||||
modelToUse === DEFAULT_GEMINI_FLASH_MODEL
|
||||
) {
|
||||
throw new Error(
|
||||
'Please submit a new query to continue with the Flash model.',
|
||||
);
|
||||
}
|
||||
|
||||
return this.config.getContentGenerator().generateContentStream(
|
||||
const apiCall = () =>
|
||||
this.config.getContentGenerator().generateContentStream(
|
||||
{
|
||||
model: modelToUse,
|
||||
model,
|
||||
contents: requestContents,
|
||||
config: { ...this.generationConfig, ...params.config },
|
||||
},
|
||||
prompt_id,
|
||||
);
|
||||
};
|
||||
|
||||
const onPersistent429Callback = async (
|
||||
authType?: string,
|
||||
error?: unknown,
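Note: with `getEffectiveModel` removed above, the chat layer no longer swaps in the Flash model inside `apiCall`; the caller-resolved model is sent unchanged, and quota fallback is signalled through the persistent-429 callback instead. A minimal sketch of that control flow, with `handleFallback` standing in for whatever handler the config wires up (an assumed name, not taken from this diff):

```ts
// Sketch only: returning true asks the retry wrapper to retry the request; the fallback
// model takes effect by reconfiguring the client, not by rewriting the per-call model
// string as the removed code did.
declare function handleFallback(authType?: string, error?: unknown): Promise<boolean>;

const onPersistent429 = async (authType?: string, error?: unknown): Promise<boolean> =>
  // The CLI-provided handler decides whether the failed request should be retried.
  (await handleFallback(authType, error)) === true;
```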
|
||||
|
||||
@@ -47,7 +47,7 @@ describe('executeToolCall', () => {
|
||||
getDebugMode: () => false,
|
||||
getContentGeneratorConfig: () => ({
|
||||
model: 'test-model',
|
||||
authType: 'gemini-api-key',
|
||||
authType: 'gemini',
|
||||
}),
|
||||
getShellExecutionConfig: () => ({
|
||||
terminalWidth: 90,
|
||||
|
||||
@@ -207,6 +207,27 @@ describe('OpenAIContentConverter', () => {
|
||||
expect.objectContaining({ text: 'visible text' }),
|
||||
);
|
||||
});
|
||||
|
||||
it('should not throw when streaming chunk has no delta', () => {
|
||||
const chunk = converter.convertOpenAIChunkToGemini({
|
||||
object: 'chat.completion.chunk',
|
||||
id: 'chunk-2',
|
||||
created: 456,
|
||||
choices: [
|
||||
{
|
||||
index: 0,
|
||||
// Some OpenAI-compatible providers may omit delta entirely.
|
||||
delta: undefined,
|
||||
finish_reason: null,
|
||||
logprobs: null,
|
||||
},
|
||||
],
|
||||
model: 'gpt-test',
|
||||
} as unknown as OpenAI.Chat.ChatCompletionChunk);
|
||||
|
||||
const parts = chunk.candidates?.[0]?.content?.parts;
|
||||
expect(parts).toEqual([]);
|
||||
});
|
||||
});
|
||||
|
||||
describe('convertGeminiToolsToOpenAI', () => {
|
||||
|
||||
@@ -93,6 +93,14 @@ export class OpenAIContentConverter {
|
||||
this.schemaCompliance = schemaCompliance;
|
||||
}
|
||||
|
||||
/**
|
||||
* Update the model used for response metadata (modelVersion/logging) and any
|
||||
* model-specific conversion behavior.
|
||||
*/
|
||||
setModel(model: string): void {
|
||||
this.model = model;
|
||||
}
|
||||
|
||||
/**
|
||||
* Reset streaming tool calls parser for new stream processing
|
||||
* This should be called at the beginning of each stream to prevent
|
||||
@@ -791,7 +799,7 @@ export class OpenAIContentConverter {
|
||||
const parts: Part[] = [];
|
||||
|
||||
const reasoningText = (choice.delta as ExtendedCompletionChunkDelta)
|
||||
.reasoning_content;
|
||||
?.reasoning_content;
|
||||
if (reasoningText) {
|
||||
parts.push({ text: reasoningText, thought: true });
|
||||
}
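Note: the one-character change above (`.` to `?.`) guards against OpenAI-compatible backends that omit `delta` on a streaming chunk, which is exactly what the new converter test earlier in this diff exercises. A self-contained sketch of the failure mode (the chunk shape here is assumed for illustration, not the library type):

```ts
// A keep-alive style chunk with delta omitted, as some providers send it.
const chunk = {
  choices: [{ index: 0, delta: undefined as { reasoning_content?: string } | undefined }],
};

// Before the fix, chunk.choices[0].delta.reasoning_content throws a TypeError.
// With optional chaining the value is simply undefined and the chunk converts to an empty parts array.
const reasoningText = chunk.choices[0]?.delta?.reasoning_content;
console.assert(reasoningText === undefined);
```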
|
||||
|
||||
@@ -46,6 +46,7 @@ describe('ContentGenerationPipeline', () => {
|
||||
|
||||
// Mock converter
|
||||
mockConverter = {
|
||||
setModel: vi.fn(),
|
||||
convertGeminiRequestToOpenAI: vi.fn(),
|
||||
convertOpenAIResponseToGemini: vi.fn(),
|
||||
convertOpenAIChunkToGemini: vi.fn(),
|
||||
@@ -99,6 +100,7 @@ describe('ContentGenerationPipeline', () => {
|
||||
describe('constructor', () => {
|
||||
it('should initialize with correct configuration', () => {
|
||||
expect(mockProvider.buildClient).toHaveBeenCalled();
|
||||
// Converter is constructed once and the model is updated per-request via setModel().
|
||||
expect(OpenAIContentConverter).toHaveBeenCalledWith(
|
||||
'test-model',
|
||||
undefined,
|
||||
@@ -144,6 +146,9 @@ describe('ContentGenerationPipeline', () => {
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(mockGeminiResponse);
|
||||
expect(
|
||||
(mockConverter as unknown as { setModel: Mock }).setModel,
|
||||
).toHaveBeenCalledWith('test-model');
|
||||
expect(mockConverter.convertGeminiRequestToOpenAI).toHaveBeenCalledWith(
|
||||
request,
|
||||
);
|
||||
@@ -164,6 +169,53 @@ describe('ContentGenerationPipeline', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should ignore request.model override and always use configured model', async () => {
|
||||
// Arrange
|
||||
const request: GenerateContentParameters = {
|
||||
model: 'override-model',
|
||||
contents: [{ parts: [{ text: 'Hello' }], role: 'user' }],
|
||||
};
|
||||
const userPromptId = 'test-prompt-id';
|
||||
|
||||
const mockMessages = [
|
||||
{ role: 'user', content: 'Hello' },
|
||||
] as OpenAI.Chat.ChatCompletionMessageParam[];
|
||||
const mockOpenAIResponse = {
|
||||
id: 'response-id',
|
||||
choices: [
|
||||
{ message: { content: 'Hello response' }, finish_reason: 'stop' },
|
||||
],
|
||||
created: Date.now(),
|
||||
model: 'override-model',
|
||||
} as OpenAI.Chat.ChatCompletion;
|
||||
const mockGeminiResponse = new GenerateContentResponse();
|
||||
|
||||
(mockConverter.convertGeminiRequestToOpenAI as Mock).mockReturnValue(
|
||||
mockMessages,
|
||||
);
|
||||
(mockConverter.convertOpenAIResponseToGemini as Mock).mockReturnValue(
|
||||
mockGeminiResponse,
|
||||
);
|
||||
(mockClient.chat.completions.create as Mock).mockResolvedValue(
|
||||
mockOpenAIResponse,
|
||||
);
|
||||
|
||||
// Act
|
||||
const result = await pipeline.execute(request, userPromptId);
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(mockGeminiResponse);
|
||||
expect(
|
||||
(mockConverter as unknown as { setModel: Mock }).setModel,
|
||||
).toHaveBeenCalledWith('test-model');
|
||||
expect(mockClient.chat.completions.create).toHaveBeenCalledWith(
|
||||
expect.objectContaining({
|
||||
model: 'test-model',
|
||||
}),
|
||||
expect.any(Object),
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle tools in request', async () => {
|
||||
// Arrange
|
||||
const request: GenerateContentParameters = {
|
||||
@@ -217,6 +269,9 @@ describe('ContentGenerationPipeline', () => {
|
||||
|
||||
// Assert
|
||||
expect(result).toBe(mockGeminiResponse);
|
||||
expect(
|
||||
(mockConverter as unknown as { setModel: Mock }).setModel,
|
||||
).toHaveBeenCalledWith('test-model');
|
||||
expect(mockConverter.convertGeminiToolsToOpenAI).toHaveBeenCalledWith(
|
||||
request.config!.tools,
|
||||
);
|
||||
|
||||
@@ -40,10 +40,16 @@ export class ContentGenerationPipeline {
|
||||
request: GenerateContentParameters,
|
||||
userPromptId: string,
|
||||
): Promise<GenerateContentResponse> {
|
||||
// For OpenAI-compatible providers, the configured model is the single source of truth.
|
||||
// We intentionally ignore request.model because upstream callers may pass a model string
|
||||
// that is not valid/available for the OpenAI-compatible backend.
|
||||
const effectiveModel = this.contentGeneratorConfig.model;
|
||||
this.converter.setModel(effectiveModel);
|
||||
return this.executeWithErrorHandling(
|
||||
request,
|
||||
userPromptId,
|
||||
false,
|
||||
effectiveModel,
|
||||
async (openaiRequest) => {
|
||||
const openaiResponse = (await this.client.chat.completions.create(
|
||||
openaiRequest,
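Note: the net effect of this hunk is that `request.model` never reaches the OpenAI-compatible backend; the configured model drives the converter metadata, the request context, and the outgoing request alike. A condensed sketch with the surrounding class members stubbed out (stub names assumed):

```ts
const contentGeneratorConfig = { model: 'qwen3-coder-plus' };
const converter = { setModel: (_model: string) => {} };

const request = { model: 'override-model', contents: [] }; // caller-supplied, intentionally ignored
const effectiveModel = contentGeneratorConfig.model;
converter.setModel(effectiveModel);              // response metadata / logging
const openaiRequest = { model: effectiveModel }; // what the backend actually receives
console.assert(openaiRequest.model !== request.model);
```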
|
||||
@@ -64,10 +70,13 @@ export class ContentGenerationPipeline {
|
||||
request: GenerateContentParameters,
|
||||
userPromptId: string,
|
||||
): Promise<AsyncGenerator<GenerateContentResponse>> {
|
||||
const effectiveModel = this.contentGeneratorConfig.model;
|
||||
this.converter.setModel(effectiveModel);
|
||||
return this.executeWithErrorHandling(
|
||||
request,
|
||||
userPromptId,
|
||||
true,
|
||||
effectiveModel,
|
||||
async (openaiRequest, context) => {
|
||||
// Stage 1: Create OpenAI stream
|
||||
const stream = (await this.client.chat.completions.create(
|
||||
@@ -224,12 +233,13 @@ export class ContentGenerationPipeline {
|
||||
request: GenerateContentParameters,
|
||||
userPromptId: string,
|
||||
streaming: boolean = false,
|
||||
effectiveModel: string,
|
||||
): Promise<OpenAI.Chat.ChatCompletionCreateParams> {
|
||||
const messages = this.converter.convertGeminiRequestToOpenAI(request);
|
||||
|
||||
// Apply provider-specific enhancements
|
||||
const baseRequest: OpenAI.Chat.ChatCompletionCreateParams = {
|
||||
model: this.contentGeneratorConfig.model,
|
||||
model: effectiveModel,
|
||||
messages,
|
||||
...this.buildGenerateContentConfig(request),
|
||||
};
|
||||
@@ -342,18 +352,24 @@ export class ContentGenerationPipeline {
|
||||
request: GenerateContentParameters,
|
||||
userPromptId: string,
|
||||
isStreaming: boolean,
|
||||
effectiveModel: string,
|
||||
executor: (
|
||||
openaiRequest: OpenAI.Chat.ChatCompletionCreateParams,
|
||||
context: RequestContext,
|
||||
) => Promise<T>,
|
||||
): Promise<T> {
|
||||
const context = this.createRequestContext(userPromptId, isStreaming);
|
||||
const context = this.createRequestContext(
|
||||
userPromptId,
|
||||
isStreaming,
|
||||
effectiveModel,
|
||||
);
|
||||
|
||||
try {
|
||||
const openaiRequest = await this.buildRequest(
|
||||
request,
|
||||
userPromptId,
|
||||
isStreaming,
|
||||
effectiveModel,
|
||||
);
|
||||
|
||||
const result = await executor(openaiRequest, context);
|
||||
@@ -385,10 +401,11 @@ export class ContentGenerationPipeline {
|
||||
private createRequestContext(
|
||||
userPromptId: string,
|
||||
isStreaming: boolean,
|
||||
effectiveModel: string,
|
||||
): RequestContext {
|
||||
return {
|
||||
userPromptId,
|
||||
model: this.contentGeneratorConfig.model,
|
||||
model: effectiveModel,
|
||||
authType: this.contentGeneratorConfig.authType || 'unknown',
|
||||
startTime: Date.now(),
|
||||
duration: 0,
|
||||
|
||||
@@ -1,23 +0,0 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Google LLC
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
/**
|
||||
* Defines the intent returned by the UI layer during a fallback scenario.
|
||||
*/
|
||||
export type FallbackIntent =
|
||||
| 'retry' // Immediately retry the current request with the fallback model.
|
||||
| 'stop' // Switch to fallback for future requests, but stop the current request.
|
||||
| 'auth'; // Stop the current request; user intends to change authentication.
|
||||
|
||||
/**
|
||||
* The interface for the handler provided by the UI layer (e.g., the CLI)
|
||||
* to interact with the user during a fallback scenario.
|
||||
*/
|
||||
export type FallbackModelHandler = (
|
||||
failedModel: string,
|
||||
fallbackModel: string,
|
||||
error?: unknown,
|
||||
) => Promise<FallbackIntent | null>;
|
||||
@@ -9,6 +9,30 @@ export * from './config/config.js';
|
||||
export * from './output/types.js';
|
||||
export * from './output/json-formatter.js';
|
||||
|
||||
// Export models
|
||||
export {
|
||||
type ModelCapabilities,
|
||||
type ModelGenerationConfig,
|
||||
type ModelConfig as ProviderModelConfig,
|
||||
type ModelProvidersConfig,
|
||||
type ResolvedModelConfig,
|
||||
type AvailableModel,
|
||||
type ModelSwitchMetadata,
|
||||
QWEN_OAUTH_MODELS,
|
||||
ModelRegistry,
|
||||
ModelsConfig,
|
||||
type ModelsConfigOptions,
|
||||
type OnModelChangeCallback,
|
||||
// Model configuration resolver
|
||||
resolveModelConfig,
|
||||
validateModelConfig,
|
||||
type ModelConfigSourcesInput,
|
||||
type ModelConfigCliInput,
|
||||
type ModelConfigSettingsInput,
|
||||
type ModelConfigResolutionResult,
|
||||
type ModelConfigValidationResult,
|
||||
} from './models/index.js';
|
||||
|
||||
// Export Core Logic
|
||||
export * from './core/client.js';
|
||||
export * from './core/contentGenerator.js';
|
||||
@@ -21,8 +45,6 @@ export * from './core/geminiRequest.js';
|
||||
export * from './core/coreToolScheduler.js';
|
||||
export * from './core/nonInteractiveToolExecutor.js';
|
||||
|
||||
export * from './fallback/types.js';
|
||||
|
||||
export * from './qwen/qwenOAuth2.js';
|
||||
|
||||
// Export utilities
|
||||
@@ -55,6 +77,9 @@ export * from './utils/projectSummary.js';
|
||||
export * from './utils/promptIdContext.js';
|
||||
export * from './utils/thoughtUtils.js';
|
||||
|
||||
// Config resolution utilities
|
||||
export * from './utils/configResolver.js';
|
||||
|
||||
// Export services
|
||||
export * from './services/fileDiscoveryService.js';
|
||||
export * from './services/gitService.js';
|
||||
|
||||
packages/core/src/models/constants.ts (Normal file, 134 lines)
@@ -0,0 +1,134 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
|
||||
|
||||
import type { ModelConfig } from './types.js';
|
||||
|
||||
type AuthType = import('../core/contentGenerator.js').AuthType;
|
||||
type ContentGeneratorConfig =
|
||||
import('../core/contentGenerator.js').ContentGeneratorConfig;
|
||||
|
||||
/**
|
||||
* Field keys for model-scoped generation config.
|
||||
*
|
||||
* Kept in a small standalone module to avoid circular deps. The `import('...')`
|
||||
* usage is type-only and does not emit runtime imports.
|
||||
*/
|
||||
export const MODEL_GENERATION_CONFIG_FIELDS = [
|
||||
'samplingParams',
|
||||
'timeout',
|
||||
'maxRetries',
|
||||
'disableCacheControl',
|
||||
'schemaCompliance',
|
||||
'reasoning',
|
||||
] as const satisfies ReadonlyArray<keyof ContentGeneratorConfig>;
|
||||
|
||||
/**
|
||||
* Credential-related fields that are part of ContentGeneratorConfig
|
||||
* but not ModelGenerationConfig.
|
||||
*/
|
||||
export const CREDENTIAL_FIELDS = [
|
||||
'model',
|
||||
'apiKey',
|
||||
'apiKeyEnvKey',
|
||||
'baseUrl',
|
||||
] as const satisfies ReadonlyArray<keyof ContentGeneratorConfig>;
|
||||
|
||||
/**
|
||||
* All provider-sourced fields that need to be tracked for source attribution
|
||||
* and cleared when switching from provider to manual credentials.
|
||||
*/
|
||||
export const PROVIDER_SOURCED_FIELDS = [
|
||||
...CREDENTIAL_FIELDS,
|
||||
...MODEL_GENERATION_CONFIG_FIELDS,
|
||||
] as const;
|
||||
|
||||
/**
|
||||
* Environment variable mappings per authType.
|
||||
*/
|
||||
export interface AuthEnvMapping {
|
||||
apiKey: string[];
|
||||
baseUrl: string[];
|
||||
model: string[];
|
||||
}
|
||||
|
||||
export const AUTH_ENV_MAPPINGS = {
|
||||
openai: {
|
||||
apiKey: ['OPENAI_API_KEY'],
|
||||
baseUrl: ['OPENAI_BASE_URL'],
|
||||
model: ['OPENAI_MODEL', 'QWEN_MODEL'],
|
||||
},
|
||||
anthropic: {
|
||||
apiKey: ['ANTHROPIC_API_KEY'],
|
||||
baseUrl: ['ANTHROPIC_BASE_URL'],
|
||||
model: ['ANTHROPIC_MODEL'],
|
||||
},
|
||||
gemini: {
|
||||
apiKey: ['GEMINI_API_KEY'],
|
||||
baseUrl: [],
|
||||
model: ['GEMINI_MODEL'],
|
||||
},
|
||||
'vertex-ai': {
|
||||
apiKey: ['GOOGLE_API_KEY'],
|
||||
baseUrl: [],
|
||||
model: ['GOOGLE_MODEL'],
|
||||
},
|
||||
'qwen-oauth': {
|
||||
apiKey: [],
|
||||
baseUrl: [],
|
||||
model: [],
|
||||
},
|
||||
} as const satisfies Record<AuthType, AuthEnvMapping>;
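Note: each mapping lists environment variables in lookup order, so the first name that is set wins (the `QWEN_MODEL` fallback test later in this diff relies on that). A small illustration of consuming such a mapping; the `firstDefined` helper is invented for the example, the real resolver builds layered sources instead:

```ts
function firstDefined(
  env: Record<string, string | undefined>,
  keys: readonly string[],
): string | undefined {
  for (const key of keys) {
    const value = env[key];
    if (value) return value;
  }
  return undefined;
}

// OPENAI_MODEL wins over QWEN_MODEL because it comes first in the openai mapping.
const model = firstDefined(
  { QWEN_MODEL: 'qwen3-coder-plus', OPENAI_MODEL: 'my-model' },
  AUTH_ENV_MAPPINGS.openai.model,
);
console.assert(model === 'my-model');
```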
|
||||
|
||||
export const DEFAULT_MODELS = {
|
||||
openai: 'qwen3-coder-plus',
|
||||
'qwen-oauth': DEFAULT_QWEN_MODEL,
|
||||
} as Partial<Record<AuthType, string>>;
|
||||
|
||||
export const QWEN_OAUTH_ALLOWED_MODELS = [
|
||||
DEFAULT_QWEN_MODEL,
|
||||
'vision-model',
|
||||
] as const;
|
||||
|
||||
/**
|
||||
* Hard-coded Qwen OAuth models that are always available.
|
||||
* These cannot be overridden by user configuration.
|
||||
*/
|
||||
export const QWEN_OAUTH_MODELS: ModelConfig[] = [
|
||||
{
|
||||
id: 'coder-model',
|
||||
name: 'Qwen Coder',
|
||||
description:
|
||||
'The latest Qwen Coder model from Alibaba Cloud ModelStudio (version: qwen3-coder-plus-2025-09-23)',
|
||||
capabilities: { vision: false },
|
||||
generationConfig: {
|
||||
samplingParams: {
|
||||
temperature: 0.7,
|
||||
top_p: 0.9,
|
||||
max_tokens: 8192,
|
||||
},
|
||||
timeout: 60000,
|
||||
maxRetries: 3,
|
||||
},
|
||||
},
|
||||
{
|
||||
id: 'vision-model',
|
||||
name: 'Qwen Vision',
|
||||
description:
|
||||
'The latest Qwen Vision model from Alibaba Cloud ModelStudio (version: qwen3-vl-plus-2025-09-23)',
|
||||
capabilities: { vision: true },
|
||||
generationConfig: {
|
||||
samplingParams: {
|
||||
temperature: 0.7,
|
||||
top_p: 0.9,
|
||||
max_tokens: 8192,
|
||||
},
|
||||
timeout: 60000,
|
||||
maxRetries: 3,
|
||||
},
|
||||
},
|
||||
];
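Note: a quick illustration of how the two smaller constants above are read; the lookup mirrors what the resolver later in this diff does with them:

```ts
// Per-auth default used when no model is supplied anywhere else.
const openaiDefault = DEFAULT_MODELS[AuthType.USE_OPENAI]; // 'qwen3-coder-plus'

// Closed allow-list for Qwen OAuth; anything else falls back to DEFAULT_QWEN_MODEL.
const allowed = new Set<string>(QWEN_OAUTH_ALLOWED_MODELS);
console.assert(openaiDefault === 'qwen3-coder-plus' && allowed.has('vision-model'));
```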
|
||||
packages/core/src/models/index.ts (Normal file, 44 lines)
@@ -0,0 +1,44 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
export {
|
||||
type ModelCapabilities,
|
||||
type ModelGenerationConfig,
|
||||
type ModelConfig,
|
||||
type ModelProvidersConfig,
|
||||
type ResolvedModelConfig,
|
||||
type AvailableModel,
|
||||
type ModelSwitchMetadata,
|
||||
} from './types.js';
|
||||
|
||||
export { ModelRegistry } from './modelRegistry.js';
|
||||
|
||||
export {
|
||||
ModelsConfig,
|
||||
type ModelsConfigOptions,
|
||||
type OnModelChangeCallback,
|
||||
} from './modelsConfig.js';
|
||||
|
||||
export {
|
||||
AUTH_ENV_MAPPINGS,
|
||||
CREDENTIAL_FIELDS,
|
||||
DEFAULT_MODELS,
|
||||
MODEL_GENERATION_CONFIG_FIELDS,
|
||||
PROVIDER_SOURCED_FIELDS,
|
||||
QWEN_OAUTH_ALLOWED_MODELS,
|
||||
QWEN_OAUTH_MODELS,
|
||||
} from './constants.js';
|
||||
|
||||
// Model configuration resolver
|
||||
export {
|
||||
resolveModelConfig,
|
||||
validateModelConfig,
|
||||
type ModelConfigSourcesInput,
|
||||
type ModelConfigCliInput,
|
||||
type ModelConfigSettingsInput,
|
||||
type ModelConfigResolutionResult,
|
||||
type ModelConfigValidationResult,
|
||||
} from './modelConfigResolver.js';
|
||||
packages/core/src/models/modelConfigErrors.ts (Normal file, 125 lines)
@@ -0,0 +1,125 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
export function getDefaultApiKeyEnvVar(authType: string | undefined): string {
|
||||
switch (authType) {
|
||||
case 'openai':
|
||||
return 'OPENAI_API_KEY';
|
||||
case 'anthropic':
|
||||
return 'ANTHROPIC_API_KEY';
|
||||
case 'gemini':
|
||||
return 'GEMINI_API_KEY';
|
||||
case 'vertex-ai':
|
||||
return 'GOOGLE_API_KEY';
|
||||
default:
|
||||
return 'API_KEY';
|
||||
}
|
||||
}
|
||||
|
||||
export function getDefaultModelEnvVar(authType: string | undefined): string {
|
||||
switch (authType) {
|
||||
case 'openai':
|
||||
return 'OPENAI_MODEL';
|
||||
case 'anthropic':
|
||||
return 'ANTHROPIC_MODEL';
|
||||
case 'gemini':
|
||||
return 'GEMINI_MODEL';
|
||||
case 'vertex-ai':
|
||||
return 'GOOGLE_MODEL';
|
||||
default:
|
||||
return 'MODEL';
|
||||
}
|
||||
}
|
||||
|
||||
export abstract class ModelConfigError extends Error {
|
||||
abstract readonly code: string;
|
||||
|
||||
protected constructor(message: string) {
|
||||
super(message);
|
||||
this.name = new.target.name;
|
||||
Object.setPrototypeOf(this, new.target.prototype);
|
||||
}
|
||||
}
|
||||
|
||||
export class StrictMissingCredentialsError extends ModelConfigError {
|
||||
readonly code = 'STRICT_MISSING_CREDENTIALS';
|
||||
|
||||
constructor(
|
||||
authType: string | undefined,
|
||||
model: string | undefined,
|
||||
envKey?: string,
|
||||
) {
|
||||
const providerKey = authType || '(unknown)';
|
||||
const modelName = model || '(unknown)';
|
||||
super(
|
||||
`Missing credentials for modelProviders model '${modelName}'. ` +
|
||||
(envKey
|
||||
? `Current configured envKey: '${envKey}'. Set that environment variable, or update modelProviders.${providerKey}[].envKey.`
|
||||
: `Configure modelProviders.${providerKey}[].envKey and set that environment variable.`),
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export class StrictMissingModelIdError extends ModelConfigError {
|
||||
readonly code = 'STRICT_MISSING_MODEL_ID';
|
||||
|
||||
constructor(authType: string | undefined) {
|
||||
super(
|
||||
`Missing model id for strict modelProviders resolution (authType: ${authType}).`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export class MissingApiKeyError extends ModelConfigError {
|
||||
readonly code = 'MISSING_API_KEY';
|
||||
|
||||
constructor(params: {
|
||||
authType: string | undefined;
|
||||
model: string | undefined;
|
||||
baseUrl: string | undefined;
|
||||
envKey: string;
|
||||
}) {
|
||||
super(
|
||||
`Missing API key for ${params.authType} auth. ` +
|
||||
`Current model: '${params.model || '(unknown)'}', baseUrl: '${params.baseUrl || '(default)'}'. ` +
|
||||
`Provide an API key via settings (security.auth.apiKey), ` +
|
||||
`or set the environment variable '${params.envKey}'.`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export class MissingModelError extends ModelConfigError {
|
||||
readonly code = 'MISSING_MODEL';
|
||||
|
||||
constructor(params: { authType: string | undefined; envKey: string }) {
|
||||
super(
|
||||
`Missing model for ${params.authType} auth. ` +
|
||||
`Set the environment variable '${params.envKey}'.`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export class MissingBaseUrlError extends ModelConfigError {
|
||||
readonly code = 'MISSING_BASE_URL';
|
||||
|
||||
constructor(params: {
|
||||
authType: string | undefined;
|
||||
model: string | undefined;
|
||||
}) {
|
||||
super(
|
||||
`Missing baseUrl for modelProviders model '${params.model || '(unknown)'}'. ` +
|
||||
`Configure modelProviders.${params.authType || '(unknown)'}[].baseUrl.`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
export class MissingAnthropicBaseUrlEnvError extends ModelConfigError {
|
||||
readonly code = 'MISSING_ANTHROPIC_BASE_URL_ENV';
|
||||
|
||||
constructor() {
|
||||
super('ANTHROPIC_BASE_URL environment variable not found.');
|
||||
}
|
||||
}
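Note: these error classes surface through `validateModelConfig`, which, as the tests in the next file show, reports problems instead of throwing. A minimal usage sketch:

```ts
const result = validateModelConfig({
  authType: AuthType.USE_OPENAI,
  model: 'gpt-4',
  // apiKey deliberately omitted
});
if (!result.valid) {
  for (const error of result.errors) {
    console.error(error.message); // e.g. "Missing API key for openai auth. ..."
  }
}
```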
|
||||
packages/core/src/models/modelConfigResolver.test.ts (Normal file, 355 lines)
@@ -0,0 +1,355 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect } from 'vitest';
|
||||
import {
|
||||
resolveModelConfig,
|
||||
validateModelConfig,
|
||||
} from './modelConfigResolver.js';
|
||||
import { AuthType } from '../core/contentGenerator.js';
|
||||
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
|
||||
|
||||
describe('modelConfigResolver', () => {
|
||||
describe('resolveModelConfig', () => {
|
||||
describe('OpenAI auth type', () => {
|
||||
it('resolves from CLI with highest priority', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {
|
||||
model: 'cli-model',
|
||||
apiKey: 'cli-key',
|
||||
baseUrl: 'https://cli.example.com',
|
||||
},
|
||||
settings: {
|
||||
model: 'settings-model',
|
||||
apiKey: 'settings-key',
|
||||
baseUrl: 'https://settings.example.com',
|
||||
},
|
||||
env: {
|
||||
OPENAI_MODEL: 'env-model',
|
||||
OPENAI_API_KEY: 'env-key',
|
||||
OPENAI_BASE_URL: 'https://env.example.com',
|
||||
},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('cli-model');
|
||||
expect(result.config.apiKey).toBe('cli-key');
|
||||
expect(result.config.baseUrl).toBe('https://cli.example.com');
|
||||
|
||||
expect(result.sources['model'].kind).toBe('cli');
|
||||
expect(result.sources['apiKey'].kind).toBe('cli');
|
||||
expect(result.sources['baseUrl'].kind).toBe('cli');
|
||||
});
|
||||
|
||||
it('falls back to env when CLI not provided', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {},
|
||||
settings: {
|
||||
model: 'settings-model',
|
||||
},
|
||||
env: {
|
||||
OPENAI_MODEL: 'env-model',
|
||||
OPENAI_API_KEY: 'env-key',
|
||||
},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('env-model');
|
||||
expect(result.config.apiKey).toBe('env-key');
|
||||
|
||||
expect(result.sources['model'].kind).toBe('env');
|
||||
expect(result.sources['apiKey'].kind).toBe('env');
|
||||
});
|
||||
|
||||
it('falls back to settings when env not provided', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {},
|
||||
settings: {
|
||||
model: 'settings-model',
|
||||
apiKey: 'settings-key',
|
||||
baseUrl: 'https://settings.example.com',
|
||||
},
|
||||
env: {},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('settings-model');
|
||||
expect(result.config.apiKey).toBe('settings-key');
|
||||
expect(result.config.baseUrl).toBe('https://settings.example.com');
|
||||
|
||||
expect(result.sources['model'].kind).toBe('settings');
|
||||
expect(result.sources['apiKey'].kind).toBe('settings');
|
||||
expect(result.sources['baseUrl'].kind).toBe('settings');
|
||||
});
|
||||
|
||||
it('uses default model when nothing provided', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {},
|
||||
settings: {},
|
||||
env: {
|
||||
OPENAI_API_KEY: 'some-key', // need key to be valid
|
||||
},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('qwen3-coder-plus');
|
||||
expect(result.sources['model'].kind).toBe('default');
|
||||
});
|
||||
|
||||
it('prioritizes modelProvider over CLI', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {
|
||||
model: 'cli-model',
|
||||
},
|
||||
settings: {},
|
||||
env: {
|
||||
MY_CUSTOM_KEY: 'provider-key',
|
||||
},
|
||||
modelProvider: {
|
||||
id: 'provider-model',
|
||||
name: 'Provider Model',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
envKey: 'MY_CUSTOM_KEY',
|
||||
baseUrl: 'https://provider.example.com',
|
||||
generationConfig: {},
|
||||
capabilities: {},
|
||||
},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('provider-model');
|
||||
expect(result.config.apiKey).toBe('provider-key');
|
||||
expect(result.config.baseUrl).toBe('https://provider.example.com');
|
||||
|
||||
expect(result.sources['model'].kind).toBe('modelProviders');
|
||||
expect(result.sources['apiKey'].kind).toBe('env');
|
||||
expect(result.sources['apiKey'].via?.kind).toBe('modelProviders');
|
||||
});
|
||||
|
||||
it('reads QWEN_MODEL as fallback for OPENAI_MODEL', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {},
|
||||
settings: {},
|
||||
env: {
|
||||
QWEN_MODEL: 'qwen-model',
|
||||
OPENAI_API_KEY: 'key',
|
||||
},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('qwen-model');
|
||||
expect(result.sources['model'].envKey).toBe('QWEN_MODEL');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Qwen OAuth auth type', () => {
|
||||
it('uses default model for Qwen OAuth', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
cli: {},
|
||||
settings: {},
|
||||
env: {},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe(DEFAULT_QWEN_MODEL);
|
||||
expect(result.config.apiKey).toBe('QWEN_OAUTH_DYNAMIC_TOKEN');
|
||||
expect(result.sources['apiKey'].kind).toBe('computed');
|
||||
});
|
||||
|
||||
it('allows vision-model for Qwen OAuth', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
cli: {
|
||||
model: 'vision-model',
|
||||
},
|
||||
settings: {},
|
||||
env: {},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('vision-model');
|
||||
expect(result.sources['model'].kind).toBe('cli');
|
||||
});
|
||||
|
||||
it('warns and falls back for unsupported Qwen OAuth models', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
cli: {
|
||||
model: 'unsupported-model',
|
||||
},
|
||||
settings: {},
|
||||
env: {},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe(DEFAULT_QWEN_MODEL);
|
||||
expect(result.warnings).toHaveLength(1);
|
||||
expect(result.warnings[0]).toContain('unsupported-model');
|
||||
});
|
||||
});
|
||||
|
||||
describe('Anthropic auth type', () => {
|
||||
it('resolves Anthropic config from env', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_ANTHROPIC,
|
||||
cli: {},
|
||||
settings: {},
|
||||
env: {
|
||||
ANTHROPIC_API_KEY: 'anthropic-key',
|
||||
ANTHROPIC_BASE_URL: 'https://anthropic.example.com',
|
||||
ANTHROPIC_MODEL: 'claude-3',
|
||||
},
|
||||
});
|
||||
|
||||
expect(result.config.model).toBe('claude-3');
|
||||
expect(result.config.apiKey).toBe('anthropic-key');
|
||||
expect(result.config.baseUrl).toBe('https://anthropic.example.com');
|
||||
});
|
||||
});
|
||||
|
||||
describe('generation config resolution', () => {
|
||||
it('merges generation config from settings', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {},
|
||||
settings: {
|
||||
apiKey: 'key',
|
||||
generationConfig: {
|
||||
timeout: 60000,
|
||||
maxRetries: 5,
|
||||
samplingParams: {
|
||||
temperature: 0.7,
|
||||
},
|
||||
},
|
||||
},
|
||||
env: {},
|
||||
});
|
||||
|
||||
expect(result.config.timeout).toBe(60000);
|
||||
expect(result.config.maxRetries).toBe(5);
|
||||
expect(result.config.samplingParams?.temperature).toBe(0.7);
|
||||
|
||||
expect(result.sources['timeout'].kind).toBe('settings');
|
||||
expect(result.sources['samplingParams'].kind).toBe('settings');
|
||||
});
|
||||
|
||||
it('modelProvider config overrides settings', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {},
|
||||
settings: {
|
||||
generationConfig: {
|
||||
timeout: 30000,
|
||||
},
|
||||
},
|
||||
env: {
|
||||
MY_KEY: 'key',
|
||||
},
|
||||
modelProvider: {
|
||||
id: 'model',
|
||||
name: 'Model',
|
||||
authType: AuthType.USE_OPENAI,
|
||||
envKey: 'MY_KEY',
|
||||
baseUrl: 'https://api.example.com',
|
||||
generationConfig: {
|
||||
timeout: 60000,
|
||||
},
|
||||
capabilities: {},
|
||||
},
|
||||
});
|
||||
|
||||
expect(result.config.timeout).toBe(60000);
|
||||
expect(result.sources['timeout'].kind).toBe('modelProviders');
|
||||
});
|
||||
});
|
||||
|
||||
describe('proxy handling', () => {
|
||||
it('includes proxy in config when provided', () => {
|
||||
const result = resolveModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
cli: {},
|
||||
settings: { apiKey: 'key' },
|
||||
env: {},
|
||||
proxy: 'http://proxy.example.com:8080',
|
||||
});
|
||||
|
||||
expect(result.config.proxy).toBe('http://proxy.example.com:8080');
|
||||
expect(result.sources['proxy'].kind).toBe('computed');
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('validateModelConfig', () => {
|
||||
it('passes for valid OpenAI config', () => {
|
||||
const result = validateModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
model: 'gpt-4',
|
||||
apiKey: 'sk-xxx',
|
||||
});
|
||||
|
||||
expect(result.valid).toBe(true);
|
||||
expect(result.errors).toHaveLength(0);
|
||||
});
|
||||
|
||||
it('fails when API key missing', () => {
|
||||
const result = validateModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
model: 'gpt-4',
|
||||
});
|
||||
|
||||
expect(result.valid).toBe(false);
|
||||
expect(result.errors).toHaveLength(1);
|
||||
expect(result.errors[0].message).toContain('Missing API key');
|
||||
});
|
||||
|
||||
it('fails when model missing', () => {
|
||||
const result = validateModelConfig({
|
||||
authType: AuthType.USE_OPENAI,
|
||||
model: '',
|
||||
apiKey: 'sk-xxx',
|
||||
});
|
||||
|
||||
expect(result.valid).toBe(false);
|
||||
expect(result.errors).toHaveLength(1);
|
||||
expect(result.errors[0].message).toContain('Missing model');
|
||||
});
|
||||
|
||||
it('always passes for Qwen OAuth', () => {
|
||||
const result = validateModelConfig({
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
model: DEFAULT_QWEN_MODEL,
|
||||
apiKey: 'QWEN_OAUTH_DYNAMIC_TOKEN',
|
||||
});
|
||||
|
||||
expect(result.valid).toBe(true);
|
||||
});
|
||||
|
||||
it('requires baseUrl for Anthropic', () => {
|
||||
const result = validateModelConfig({
|
||||
authType: AuthType.USE_ANTHROPIC,
|
||||
model: 'claude-3',
|
||||
apiKey: 'key',
|
||||
// missing baseUrl
|
||||
});
|
||||
|
||||
expect(result.valid).toBe(false);
|
||||
expect(result.errors[0].message).toContain('ANTHROPIC_BASE_URL');
|
||||
});
|
||||
|
||||
it('uses strict error messages for modelProvider', () => {
|
||||
const result = validateModelConfig(
|
||||
{
|
||||
authType: AuthType.USE_OPENAI,
|
||||
model: 'my-model',
|
||||
// missing apiKey
|
||||
},
|
||||
true, // isStrictModelProvider
|
||||
);
|
||||
|
||||
expect(result.valid).toBe(false);
|
||||
expect(result.errors[0].message).toContain('modelProviders');
|
||||
expect(result.errors[0].message).toContain('envKey');
|
||||
});
|
||||
});
|
||||
});
|
||||
packages/core/src/models/modelConfigResolver.ts (Normal file, 364 lines)
@@ -0,0 +1,364 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
/**
|
||||
* ModelConfigResolver - Unified resolver for model-related configuration.
|
||||
*
|
||||
* This module consolidates all model configuration resolution logic,
|
||||
* eliminating duplicate code between CLI and Core layers.
|
||||
*
|
||||
* Configuration priority (highest to lowest):
|
||||
* 1. modelProvider - Explicit selection from ModelProviders config
|
||||
* 2. CLI arguments - Command line flags (--model, --openaiApiKey, etc.)
|
||||
* 3. Environment variables - OPENAI_API_KEY, OPENAI_MODEL, etc.
|
||||
* 4. Settings - User/workspace settings file
|
||||
* 5. Defaults - Built-in default values
|
||||
*/
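Note: the priority list above is easiest to read off a concrete call. This sketch uses only the input and result shapes defined in this file; the values are illustrative:

```ts
const { config, sources } = resolveModelConfig({
  authType: AuthType.USE_OPENAI,
  cli: { model: 'cli-model' },
  settings: { apiKey: 'settings-key' },
  env: { OPENAI_BASE_URL: 'https://env.example.com' },
});

// model comes from the CLI flag, baseUrl from the environment, apiKey from settings,
// and `sources` records exactly that attribution per field.
console.assert(config.model === 'cli-model');
console.assert(sources['model'].kind === 'cli');
console.assert(sources['baseUrl'].kind === 'env');
console.assert(sources['apiKey'].kind === 'settings');
```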
|
||||
|
||||
import { AuthType } from '../core/contentGenerator.js';
|
||||
import type { ContentGeneratorConfig } from '../core/contentGenerator.js';
|
||||
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
|
||||
import {
|
||||
resolveField,
|
||||
resolveOptionalField,
|
||||
layer,
|
||||
envLayer,
|
||||
cliSource,
|
||||
settingsSource,
|
||||
modelProvidersSource,
|
||||
defaultSource,
|
||||
computedSource,
|
||||
type ConfigSource,
|
||||
type ConfigSources,
|
||||
type ConfigLayer,
|
||||
} from '../utils/configResolver.js';
|
||||
import {
|
||||
AUTH_ENV_MAPPINGS,
|
||||
DEFAULT_MODELS,
|
||||
QWEN_OAUTH_ALLOWED_MODELS,
|
||||
MODEL_GENERATION_CONFIG_FIELDS,
|
||||
} from './constants.js';
|
||||
import type { ResolvedModelConfig } from './types.js';
|
||||
export {
|
||||
validateModelConfig,
|
||||
type ModelConfigValidationResult,
|
||||
} from '../core/contentGenerator.js';
|
||||
|
||||
/**
|
||||
* CLI-provided configuration values
|
||||
*/
|
||||
export interface ModelConfigCliInput {
|
||||
model?: string;
|
||||
apiKey?: string;
|
||||
baseUrl?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Settings-provided configuration values
|
||||
*/
|
||||
export interface ModelConfigSettingsInput {
|
||||
/** Model name from settings.model.name */
|
||||
model?: string;
|
||||
/** API key from settings.security.auth.apiKey */
|
||||
apiKey?: string;
|
||||
/** Base URL from settings.security.auth.baseUrl */
|
||||
baseUrl?: string;
|
||||
/** Generation config from settings.model.generationConfig */
|
||||
generationConfig?: Partial<ContentGeneratorConfig>;
|
||||
}
|
||||
|
||||
/**
|
||||
* All input sources for model configuration resolution
|
||||
*/
|
||||
export interface ModelConfigSourcesInput {
|
||||
/** Authentication type */
|
||||
authType?: AuthType;
|
||||
|
||||
/** CLI arguments (highest priority for user-provided values) */
|
||||
cli?: ModelConfigCliInput;
|
||||
|
||||
/** Settings file configuration */
|
||||
settings?: ModelConfigSettingsInput;
|
||||
|
||||
/** Environment variables (injected for testability) */
|
||||
env: Record<string, string | undefined>;
|
||||
|
||||
/** Resolved model from ModelProviders (explicit selection, highest priority) */
|
||||
modelProvider?: ResolvedModelConfig;
|
||||
|
||||
/** Proxy URL (computed from Config) */
|
||||
proxy?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Result of model configuration resolution
|
||||
*/
|
||||
export interface ModelConfigResolutionResult {
|
||||
/** The fully resolved configuration */
|
||||
config: ContentGeneratorConfig;
|
||||
/** Source attribution for each field */
|
||||
sources: ConfigSources;
|
||||
/** Warnings generated during resolution */
|
||||
warnings: string[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve model configuration from all input sources.
|
||||
*
|
||||
* This is the single entry point for model configuration resolution.
|
||||
* It replaces the duplicate logic in:
|
||||
* - packages/cli/src/utils/modelProviderUtils.ts (resolveCliGenerationConfig)
|
||||
* - packages/core/src/core/contentGenerator.ts (resolveContentGeneratorConfigWithSources)
|
||||
*
|
||||
* @param input - All configuration sources
|
||||
* @returns Resolved configuration with source tracking
|
||||
*/
|
||||
export function resolveModelConfig(
|
||||
input: ModelConfigSourcesInput,
|
||||
): ModelConfigResolutionResult {
|
||||
const { authType, cli, settings, env, modelProvider, proxy } = input;
|
||||
const warnings: string[] = [];
|
||||
const sources: ConfigSources = {};
|
||||
|
||||
// Special handling for Qwen OAuth
|
||||
if (authType === AuthType.QWEN_OAUTH) {
|
||||
return resolveQwenOAuthConfig(input, warnings);
|
||||
}
|
||||
|
||||
// Get auth-specific env var mappings.
|
||||
// If authType is not provided, do not read any auth env vars.
|
||||
const envMapping = authType
|
||||
? AUTH_ENV_MAPPINGS[authType]
|
||||
: { model: [], apiKey: [], baseUrl: [] };
|
||||
|
||||
// Build layers for each field in priority order
|
||||
// Priority: modelProvider > cli > env > settings > default
|
||||
|
||||
// ---- Model ----
|
||||
const modelLayers: Array<ConfigLayer<string>> = [];
|
||||
|
||||
if (authType && modelProvider) {
|
||||
modelLayers.push(
|
||||
layer(
|
||||
modelProvider.id,
|
||||
modelProvidersSource(authType, modelProvider.id, 'model.id'),
|
||||
),
|
||||
);
|
||||
}
|
||||
if (cli?.model) {
|
||||
modelLayers.push(layer(cli.model, cliSource('--model')));
|
||||
}
|
||||
for (const envKey of envMapping.model) {
|
||||
modelLayers.push(envLayer(env, envKey));
|
||||
}
|
||||
if (settings?.model) {
|
||||
modelLayers.push(layer(settings.model, settingsSource('model.name')));
|
||||
}
|
||||
|
||||
const defaultModel = authType ? DEFAULT_MODELS[authType] : '';
|
||||
const modelResult = resolveField(
|
||||
modelLayers,
|
||||
defaultModel,
|
||||
defaultSource(defaultModel),
|
||||
);
|
||||
sources['model'] = modelResult.source;
|
||||
|
||||
// ---- API Key ----
|
||||
const apiKeyLayers: Array<ConfigLayer<string>> = [];
|
||||
|
||||
// For modelProvider, read from the specified envKey
|
||||
if (authType && modelProvider?.envKey) {
|
||||
const apiKeyFromEnv = env[modelProvider.envKey];
|
||||
if (apiKeyFromEnv) {
|
||||
apiKeyLayers.push(
|
||||
layer(apiKeyFromEnv, {
|
||||
kind: 'env',
|
||||
envKey: modelProvider.envKey,
|
||||
via: modelProvidersSource(authType, modelProvider.id, 'envKey'),
|
||||
}),
|
||||
);
|
||||
}
|
||||
}
|
||||
if (cli?.apiKey) {
|
||||
apiKeyLayers.push(layer(cli.apiKey, cliSource('--openaiApiKey')));
|
||||
}
|
||||
for (const envKey of envMapping.apiKey) {
|
||||
apiKeyLayers.push(envLayer(env, envKey));
|
||||
}
|
||||
if (settings?.apiKey) {
|
||||
apiKeyLayers.push(
|
||||
layer(settings.apiKey, settingsSource('security.auth.apiKey')),
|
||||
);
|
||||
}
|
||||
|
||||
const apiKeyResult = resolveOptionalField(apiKeyLayers);
|
||||
if (apiKeyResult) {
|
||||
sources['apiKey'] = apiKeyResult.source;
|
||||
}
|
||||
|
||||
// ---- Base URL ----
|
||||
const baseUrlLayers: Array<ConfigLayer<string>> = [];
|
||||
|
||||
if (authType && modelProvider?.baseUrl) {
|
||||
baseUrlLayers.push(
|
||||
layer(
|
||||
modelProvider.baseUrl,
|
||||
modelProvidersSource(authType, modelProvider.id, 'baseUrl'),
|
||||
),
|
||||
);
|
||||
}
|
||||
if (cli?.baseUrl) {
|
||||
baseUrlLayers.push(layer(cli.baseUrl, cliSource('--openaiBaseUrl')));
|
||||
}
|
||||
for (const envKey of envMapping.baseUrl) {
|
||||
baseUrlLayers.push(envLayer(env, envKey));
|
||||
}
|
||||
if (settings?.baseUrl) {
|
||||
baseUrlLayers.push(
|
||||
layer(settings.baseUrl, settingsSource('security.auth.baseUrl')),
|
||||
);
|
||||
}
|
||||
|
||||
const baseUrlResult = resolveOptionalField(baseUrlLayers);
|
||||
if (baseUrlResult) {
|
||||
sources['baseUrl'] = baseUrlResult.source;
|
||||
}
|
||||
|
||||
// ---- API Key Env Key (for error messages) ----
|
||||
let apiKeyEnvKey: string | undefined;
|
||||
if (authType && modelProvider?.envKey) {
|
||||
apiKeyEnvKey = modelProvider.envKey;
|
||||
sources['apiKeyEnvKey'] = modelProvidersSource(
|
||||
authType,
|
||||
modelProvider.id,
|
||||
'envKey',
|
||||
);
|
||||
}
|
||||
|
||||
// ---- Generation Config (from settings or modelProvider) ----
|
||||
const generationConfig = resolveGenerationConfig(
|
||||
settings?.generationConfig,
|
||||
modelProvider?.generationConfig,
|
||||
authType,
|
||||
modelProvider?.id,
|
||||
sources,
|
||||
);
|
||||
|
||||
// Build final config
|
||||
const config: ContentGeneratorConfig = {
|
||||
authType,
|
||||
model: modelResult.value || '',
|
||||
apiKey: apiKeyResult?.value,
|
||||
apiKeyEnvKey,
|
||||
baseUrl: baseUrlResult?.value,
|
||||
proxy,
|
||||
...generationConfig,
|
||||
};
|
||||
|
||||
// Add proxy source
|
||||
if (proxy) {
|
||||
sources['proxy'] = computedSource('Config.getProxy()');
|
||||
}
|
||||
|
||||
// Add authType source
|
||||
sources['authType'] = computedSource('provided by caller');
|
||||
|
||||
return { config, sources, warnings };
|
||||
}
|
||||
|
||||
/**
|
||||
* Special resolver for Qwen OAuth authentication.
|
||||
* Qwen OAuth has fixed model options and uses dynamic tokens.
|
||||
*/
|
||||
function resolveQwenOAuthConfig(
|
||||
input: ModelConfigSourcesInput,
|
||||
warnings: string[],
|
||||
): ModelConfigResolutionResult {
|
||||
const { cli, settings, proxy } = input;
|
||||
const sources: ConfigSources = {};
|
||||
|
||||
// Qwen OAuth only allows specific models
|
||||
const allowedModels = new Set<string>(QWEN_OAUTH_ALLOWED_MODELS);
|
||||
|
||||
// Determine requested model
|
||||
const requestedModel = cli?.model || settings?.model;
|
||||
let resolvedModel: string;
|
||||
let modelSource: ConfigSource;
|
||||
|
||||
if (requestedModel && allowedModels.has(requestedModel)) {
|
||||
resolvedModel = requestedModel;
|
||||
modelSource = cli?.model
|
||||
? cliSource('--model')
|
||||
: settingsSource('model.name');
|
||||
} else {
|
||||
if (requestedModel) {
|
||||
warnings.push(
|
||||
`Unsupported Qwen OAuth model '${requestedModel}', falling back to '${DEFAULT_QWEN_MODEL}'.`,
|
||||
);
|
||||
}
|
||||
resolvedModel = DEFAULT_QWEN_MODEL;
|
||||
modelSource = defaultSource(`fallback to '${DEFAULT_QWEN_MODEL}'`);
|
||||
}
|
||||
|
||||
sources['model'] = modelSource;
|
||||
sources['apiKey'] = computedSource('Qwen OAuth dynamic token');
|
||||
sources['authType'] = computedSource('provided by caller');
|
||||
|
||||
if (proxy) {
|
||||
sources['proxy'] = computedSource('Config.getProxy()');
|
||||
}
|
||||
|
||||
// Resolve generation config from settings
|
||||
const generationConfig = resolveGenerationConfig(
|
||||
settings?.generationConfig,
|
||||
undefined,
|
||||
AuthType.QWEN_OAUTH,
|
||||
resolvedModel,
|
||||
sources,
|
||||
);
|
||||
|
||||
const config: ContentGeneratorConfig = {
|
||||
authType: AuthType.QWEN_OAUTH,
|
||||
model: resolvedModel,
|
||||
apiKey: 'QWEN_OAUTH_DYNAMIC_TOKEN',
|
||||
proxy,
|
||||
...generationConfig,
|
||||
};
|
||||
|
||||
return { config, sources, warnings };
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve generation config fields (samplingParams, timeout, etc.)
|
||||
*/
|
||||
function resolveGenerationConfig(
|
||||
settingsConfig: Partial<ContentGeneratorConfig> | undefined,
|
||||
modelProviderConfig: Partial<ContentGeneratorConfig> | undefined,
|
||||
authType: AuthType | undefined,
|
||||
modelId: string | undefined,
|
||||
sources: ConfigSources,
|
||||
): Partial<ContentGeneratorConfig> {
|
||||
const result: Partial<ContentGeneratorConfig> = {};
|
||||
|
||||
for (const field of MODEL_GENERATION_CONFIG_FIELDS) {
|
||||
// ModelProvider config takes priority
|
||||
if (authType && modelProviderConfig && field in modelProviderConfig) {
|
||||
// eslint-disable-next-line @typescript-eslint/no-explicit-any
|
||||
(result as any)[field] = modelProviderConfig[field];
|
||||
sources[field] = modelProvidersSource(
|
||||
authType,
|
||||
modelId || '',
|
||||
`generationConfig.${field}`,
|
||||
);
|
||||
} else if (settingsConfig && field in settingsConfig) {
|
||||
// eslint-disable-next-line @typescript-eslint/no-explicit-any
|
||||
(result as any)[field] = settingsConfig[field];
|
||||
sources[field] = settingsSource(`model.generationConfig.${field}`);
|
||||
}
|
||||
}
|
||||
|
||||
return result;
|
||||
}
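Note: `resolveGenerationConfig` copies each generation field independently, so a provider entry can override a single knob while settings still supply the rest. The resulting field-wise precedence behaves like the spread below (plain objects used purely for illustration):

```ts
const fromSettings = { timeout: 30000, maxRetries: 5 };
const fromProvider = { timeout: 60000 };

// Provider wins per field; fields it does not set fall through to settings.
const merged = { ...fromSettings, ...fromProvider };
console.assert(merged.timeout === 60000 && merged.maxRetries === 5);
```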
|
||||
packages/core/src/models/modelRegistry.test.ts (Normal file, 388 lines)
@@ -0,0 +1,388 @@
|
||||
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect, beforeEach, vi } from 'vitest';
|
||||
import { ModelRegistry, QWEN_OAUTH_MODELS } from './modelRegistry.js';
|
||||
import { AuthType } from '../core/contentGenerator.js';
|
||||
import type { ModelProvidersConfig } from './types.js';
|
||||
|
||||
describe('ModelRegistry', () => {
|
||||
describe('initialization', () => {
|
||||
it('should always include hard-coded qwen-oauth models', () => {
|
||||
const registry = new ModelRegistry();
|
||||
|
||||
const qwenModels = registry.getModelsForAuthType(AuthType.QWEN_OAUTH);
|
||||
expect(qwenModels.length).toBe(QWEN_OAUTH_MODELS.length);
|
||||
expect(qwenModels[0].id).toBe('coder-model');
|
||||
expect(qwenModels[1].id).toBe('vision-model');
|
||||
});
|
||||
|
||||
it('should initialize with empty config', () => {
|
||||
const registry = new ModelRegistry();
|
||||
expect(registry.getModelsForAuthType(AuthType.QWEN_OAUTH).length).toBe(
|
||||
QWEN_OAUTH_MODELS.length,
|
||||
);
|
||||
expect(registry.getModelsForAuthType(AuthType.USE_OPENAI).length).toBe(0);
|
||||
});
|
||||
|
||||
it('should initialize with custom models config', () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'gpt-4-turbo',
|
||||
name: 'GPT-4 Turbo',
|
||||
baseUrl: 'https://api.openai.com/v1',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const registry = new ModelRegistry(modelProvidersConfig);
|
||||
|
||||
const openaiModels = registry.getModelsForAuthType(AuthType.USE_OPENAI);
|
||||
expect(openaiModels.length).toBe(1);
|
||||
expect(openaiModels[0].id).toBe('gpt-4-turbo');
|
||||
});
|
||||
|
||||
it('should ignore qwen-oauth models in config (hard-coded)', () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
'qwen-oauth': [
|
||||
{
|
||||
id: 'custom-qwen',
|
||||
name: 'Custom Qwen',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const registry = new ModelRegistry(modelProvidersConfig);
|
||||
|
||||
// Should still use hard-coded qwen-oauth models
|
||||
const qwenModels = registry.getModelsForAuthType(AuthType.QWEN_OAUTH);
|
||||
expect(qwenModels.length).toBe(QWEN_OAUTH_MODELS.length);
|
||||
expect(qwenModels.find((m) => m.id === 'custom-qwen')).toBeUndefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('getModelsForAuthType', () => {
|
||||
let registry: ModelRegistry;
|
||||
|
||||
beforeEach(() => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'gpt-4-turbo',
|
||||
name: 'GPT-4 Turbo',
|
||||
description: 'Most capable GPT-4',
|
||||
baseUrl: 'https://api.openai.com/v1',
|
||||
capabilities: { vision: true },
|
||||
},
|
||||
{
|
||||
id: 'gpt-3.5-turbo',
|
||||
name: 'GPT-3.5 Turbo',
|
||||
capabilities: { vision: false },
|
||||
},
|
||||
],
|
||||
};
|
||||
registry = new ModelRegistry(modelProvidersConfig);
|
||||
});
|
||||
|
||||
it('should return models for existing authType', () => {
|
||||
const models = registry.getModelsForAuthType(AuthType.USE_OPENAI);
|
||||
expect(models.length).toBe(2);
|
||||
});
|
||||
|
||||
it('should return empty array for non-existent authType', () => {
|
||||
const models = registry.getModelsForAuthType(AuthType.USE_VERTEX_AI);
|
||||
expect(models.length).toBe(0);
|
||||
});
|
||||
|
||||
it('should return AvailableModel format with correct fields', () => {
|
||||
const models = registry.getModelsForAuthType(AuthType.USE_OPENAI);
|
||||
const gpt4 = models.find((m) => m.id === 'gpt-4-turbo');
|
||||
|
||||
expect(gpt4).toBeDefined();
|
||||
expect(gpt4?.label).toBe('GPT-4 Turbo');
|
||||
expect(gpt4?.description).toBe('Most capable GPT-4');
|
||||
expect(gpt4?.isVision).toBe(true);
|
||||
expect(gpt4?.authType).toBe(AuthType.USE_OPENAI);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getModel', () => {
|
||||
let registry: ModelRegistry;
|
||||
|
||||
beforeEach(() => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'gpt-4-turbo',
|
||||
name: 'GPT-4 Turbo',
|
||||
baseUrl: 'https://api.openai.com/v1',
|
||||
generationConfig: {
|
||||
samplingParams: {
|
||||
temperature: 0.8,
|
||||
max_tokens: 4096,
|
||||
},
|
||||
},
|
||||
},
|
||||
],
|
||||
};
|
||||
registry = new ModelRegistry(modelProvidersConfig);
|
||||
});
|
||||
|
||||
it('should return resolved model config', () => {
|
||||
const model = registry.getModel(AuthType.USE_OPENAI, 'gpt-4-turbo');
|
||||
|
||||
expect(model).toBeDefined();
|
||||
expect(model?.id).toBe('gpt-4-turbo');
|
||||
expect(model?.name).toBe('GPT-4 Turbo');
|
||||
expect(model?.authType).toBe(AuthType.USE_OPENAI);
|
||||
expect(model?.baseUrl).toBe('https://api.openai.com/v1');
|
||||
});
|
||||
|
||||
it('should preserve generationConfig without applying defaults', () => {
|
||||
const model = registry.getModel(AuthType.USE_OPENAI, 'gpt-4-turbo');
|
||||
|
||||
expect(model?.generationConfig.samplingParams?.temperature).toBe(0.8);
|
||||
expect(model?.generationConfig.samplingParams?.max_tokens).toBe(4096);
|
||||
// No defaults are applied - only the configured values are present
|
||||
expect(model?.generationConfig.samplingParams?.top_p).toBeUndefined();
|
||||
expect(model?.generationConfig.timeout).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should return undefined for non-existent model', () => {
|
||||
const model = registry.getModel(AuthType.USE_OPENAI, 'non-existent');
|
||||
expect(model).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should return undefined for non-existent authType', () => {
|
||||
const model = registry.getModel(AuthType.USE_VERTEX_AI, 'some-model');
|
||||
expect(model).toBeUndefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('hasModel', () => {
|
||||
let registry: ModelRegistry;
|
||||
|
||||
beforeEach(() => {
|
||||
registry = new ModelRegistry({
|
||||
openai: [{ id: 'gpt-4', name: 'GPT-4' }],
|
||||
});
|
||||
});
|
||||
|
||||
it('should return true for existing model', () => {
|
||||
expect(registry.hasModel(AuthType.USE_OPENAI, 'gpt-4')).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false for non-existent model', () => {
|
||||
expect(registry.hasModel(AuthType.USE_OPENAI, 'non-existent')).toBe(
|
||||
false,
|
||||
);
|
||||
});
|
||||
|
||||
it('should return false for non-existent authType', () => {
|
||||
expect(registry.hasModel(AuthType.USE_VERTEX_AI, 'gpt-4')).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getDefaultModelForAuthType', () => {
|
||||
it('should return coder-model for qwen-oauth', () => {
|
||||
const registry = new ModelRegistry();
|
||||
const defaultModel = registry.getDefaultModelForAuthType(
|
||||
AuthType.QWEN_OAUTH,
|
||||
);
|
||||
expect(defaultModel?.id).toBe('coder-model');
|
||||
});
|
||||
|
||||
it('should return first model for other authTypes', () => {
|
||||
const registry = new ModelRegistry({
|
||||
openai: [
|
||||
{ id: 'gpt-4', name: 'GPT-4' },
|
||||
{ id: 'gpt-3.5', name: 'GPT-3.5' },
|
||||
],
|
||||
});
|
||||
|
||||
const defaultModel = registry.getDefaultModelForAuthType(
|
||||
AuthType.USE_OPENAI,
|
||||
);
|
||||
expect(defaultModel?.id).toBe('gpt-4');
|
||||
});
|
||||
});
|
||||
|
||||
describe('validation', () => {
|
||||
it('should throw error for model without id', () => {
|
||||
expect(
|
||||
() =>
|
||||
new ModelRegistry({
|
||||
openai: [{ id: '', name: 'No ID' }],
|
||||
}),
|
||||
).toThrow('missing required field: id');
|
||||
});
|
||||
});
|
||||
|
||||
describe('default base URLs', () => {
|
||||
it('should apply default dashscope URL for qwen-oauth', () => {
|
||||
const registry = new ModelRegistry();
|
||||
const model = registry.getModel(AuthType.QWEN_OAUTH, 'coder-model');
|
||||
      expect(model?.baseUrl).toBe('DYNAMIC_QWEN_OAUTH_BASE_URL');
    });

    it('should apply default openai URL when not specified', () => {
      const registry = new ModelRegistry({
        openai: [{ id: 'gpt-4', name: 'GPT-4' }],
      });

      const model = registry.getModel(AuthType.USE_OPENAI, 'gpt-4');
      expect(model?.baseUrl).toBe('https://api.openai.com/v1');
    });

    it('should use custom baseUrl when specified', () => {
      const registry = new ModelRegistry({
        openai: [
          {
            id: 'deepseek',
            name: 'DeepSeek',
            baseUrl: 'https://api.deepseek.com/v1',
          },
        ],
      });

      const model = registry.getModel(AuthType.USE_OPENAI, 'deepseek');
      expect(model?.baseUrl).toBe('https://api.deepseek.com/v1');
    });
  });

  describe('authType key validation', () => {
    it('should accept valid authType keys', () => {
      const registry = new ModelRegistry({
        openai: [{ id: 'gpt-4', name: 'GPT-4' }],
        gemini: [{ id: 'gemini-pro', name: 'Gemini Pro' }],
      });

      const openaiModels = registry.getModelsForAuthType(AuthType.USE_OPENAI);
      expect(openaiModels.length).toBe(1);
      expect(openaiModels[0].id).toBe('gpt-4');

      const geminiModels = registry.getModelsForAuthType(AuthType.USE_GEMINI);
      expect(geminiModels.length).toBe(1);
      expect(geminiModels[0].id).toBe('gemini-pro');
    });

    it('should skip invalid authType keys with warning', () => {
      const consoleWarnSpy = vi
        .spyOn(console, 'warn')
        .mockImplementation(() => undefined);

      const registry = new ModelRegistry({
        openai: [{ id: 'gpt-4', name: 'GPT-4' }],
        'invalid-key': [{ id: 'some-model', name: 'Some Model' }],
      } as unknown as ModelProvidersConfig);

      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('[ModelRegistry] Invalid authType key'),
      );
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('invalid-key'),
      );
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('Expected one of:'),
      );

      // Valid key should be registered
      expect(registry.getModelsForAuthType(AuthType.USE_OPENAI).length).toBe(1);

      // Invalid key should be skipped (no crash)
      const openaiModels = registry.getModelsForAuthType(AuthType.USE_OPENAI);
      expect(openaiModels.length).toBe(1);

      consoleWarnSpy.mockRestore();
    });

    it('should handle mixed valid and invalid keys', () => {
      const consoleWarnSpy = vi
        .spyOn(console, 'warn')
        .mockImplementation(() => undefined);

      const registry = new ModelRegistry({
        openai: [{ id: 'gpt-4', name: 'GPT-4' }],
        'bad-key-1': [{ id: 'model-1', name: 'Model 1' }],
        gemini: [{ id: 'gemini-pro', name: 'Gemini Pro' }],
        'bad-key-2': [{ id: 'model-2', name: 'Model 2' }],
      } as unknown as ModelProvidersConfig);

      // Should warn twice for the two invalid keys
      expect(consoleWarnSpy).toHaveBeenCalledTimes(2);
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('bad-key-1'),
      );
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('bad-key-2'),
      );

      // Valid keys should be registered
      expect(registry.getModelsForAuthType(AuthType.USE_OPENAI).length).toBe(1);
      expect(registry.getModelsForAuthType(AuthType.USE_GEMINI).length).toBe(1);

      // Invalid keys should be skipped
      const openaiModels = registry.getModelsForAuthType(AuthType.USE_OPENAI);
      expect(openaiModels.length).toBe(1);

      const geminiModels = registry.getModelsForAuthType(AuthType.USE_GEMINI);
      expect(geminiModels.length).toBe(1);

      consoleWarnSpy.mockRestore();
    });

    it('should list all valid AuthType values in warning message', () => {
      const consoleWarnSpy = vi
        .spyOn(console, 'warn')
        .mockImplementation(() => undefined);

      new ModelRegistry({
        'invalid-auth': [{ id: 'model', name: 'Model' }],
      } as unknown as ModelProvidersConfig);

      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('openai'),
      );
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('qwen-oauth'),
      );
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('gemini'),
      );
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('vertex-ai'),
      );
      expect(consoleWarnSpy).toHaveBeenCalledWith(
        expect.stringContaining('anthropic'),
      );

      consoleWarnSpy.mockRestore();
    });

    it('should work correctly with getModelsForAuthType after validation', () => {
      const consoleWarnSpy = vi
        .spyOn(console, 'warn')
        .mockImplementation(() => undefined);

      const registry = new ModelRegistry({
        openai: [
          { id: 'gpt-4', name: 'GPT-4' },
          { id: 'gpt-3.5', name: 'GPT-3.5' },
        ],
        'invalid-key': [{ id: 'invalid-model', name: 'Invalid Model' }],
      } as unknown as ModelProvidersConfig);

      const models = registry.getModelsForAuthType(AuthType.USE_OPENAI);
      expect(models.length).toBe(2);
      expect(models.find((m) => m.id === 'gpt-4')).toBeDefined();
      expect(models.find((m) => m.id === 'gpt-3.5')).toBeDefined();
      expect(models.find((m) => m.id === 'invalid-model')).toBeUndefined();

      consoleWarnSpy.mockRestore();
    });
  });
});
180  packages/core/src/models/modelRegistry.ts  Normal file
@@ -0,0 +1,180 @@
/**
 * @license
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { AuthType } from '../core/contentGenerator.js';
import { DEFAULT_OPENAI_BASE_URL } from '../core/openaiContentGenerator/constants.js';
import {
  type ModelConfig,
  type ModelProvidersConfig,
  type ResolvedModelConfig,
  type AvailableModel,
} from './types.js';
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
import { QWEN_OAUTH_MODELS } from './constants.js';

export { QWEN_OAUTH_MODELS } from './constants.js';

/**
 * Validates if a string key is a valid AuthType enum value.
 * @param key - The key to validate
 * @returns The validated AuthType or undefined if invalid
 */
function validateAuthTypeKey(key: string): AuthType | undefined {
  // Check if the key is a valid AuthType enum value
  if (Object.values(AuthType).includes(key as AuthType)) {
    return key as AuthType;
  }

  // Invalid key
  return undefined;
}

/**
 * Central registry for managing model configurations.
 * Models are organized by authType.
 */
export class ModelRegistry {
  private modelsByAuthType: Map<AuthType, Map<string, ResolvedModelConfig>>;

  private getDefaultBaseUrl(authType: AuthType): string {
    switch (authType) {
      case AuthType.QWEN_OAUTH:
        return 'DYNAMIC_QWEN_OAUTH_BASE_URL';
      case AuthType.USE_OPENAI:
        return DEFAULT_OPENAI_BASE_URL;
      default:
        return '';
    }
  }

  constructor(modelProvidersConfig?: ModelProvidersConfig) {
    this.modelsByAuthType = new Map();

    // Always register qwen-oauth models (hard-coded, cannot be overridden)
    this.registerAuthTypeModels(AuthType.QWEN_OAUTH, QWEN_OAUTH_MODELS);

    // Register user-configured models for other authTypes
    if (modelProvidersConfig) {
      for (const [rawKey, models] of Object.entries(modelProvidersConfig)) {
        const authType = validateAuthTypeKey(rawKey);

        if (!authType) {
          console.warn(
            `[ModelRegistry] Invalid authType key "${rawKey}" in modelProviders config. Expected one of: ${Object.values(AuthType).join(', ')}. Skipping.`,
          );
          continue;
        }

        // Skip qwen-oauth as it uses hard-coded models
        if (authType === AuthType.QWEN_OAUTH) {
          continue;
        }

        this.registerAuthTypeModels(authType, models);
      }
    }
  }

  /**
   * Register models for an authType
   */
  private registerAuthTypeModels(
    authType: AuthType,
    models: ModelConfig[],
  ): void {
    const modelMap = new Map<string, ResolvedModelConfig>();

    for (const config of models) {
      const resolved = this.resolveModelConfig(config, authType);
      modelMap.set(config.id, resolved);
    }

    this.modelsByAuthType.set(authType, modelMap);
  }

  /**
   * Get all models for a specific authType.
   * This is used by /model command to show only relevant models.
   */
  getModelsForAuthType(authType: AuthType): AvailableModel[] {
    const models = this.modelsByAuthType.get(authType);
    if (!models) return [];

    return Array.from(models.values()).map((model) => ({
      id: model.id,
      label: model.name,
      description: model.description,
      capabilities: model.capabilities,
      authType: model.authType,
      isVision: model.capabilities?.vision ?? false,
    }));
  }

  /**
   * Get model configuration by authType and modelId
   */
  getModel(
    authType: AuthType,
    modelId: string,
  ): ResolvedModelConfig | undefined {
    const models = this.modelsByAuthType.get(authType);
    return models?.get(modelId);
  }

  /**
   * Check if model exists for given authType
   */
  hasModel(authType: AuthType, modelId: string): boolean {
    const models = this.modelsByAuthType.get(authType);
    return models?.has(modelId) ?? false;
  }

  /**
   * Get default model for an authType.
   * For qwen-oauth, returns the coder model.
   * For others, returns the first configured model.
   */
  getDefaultModelForAuthType(
    authType: AuthType,
  ): ResolvedModelConfig | undefined {
    if (authType === AuthType.QWEN_OAUTH) {
      return this.getModel(authType, DEFAULT_QWEN_MODEL);
    }
    const models = this.modelsByAuthType.get(authType);
    if (!models || models.size === 0) return undefined;
    return Array.from(models.values())[0];
  }

  /**
   * Resolve model config by applying defaults
   */
  private resolveModelConfig(
    config: ModelConfig,
    authType: AuthType,
  ): ResolvedModelConfig {
    this.validateModelConfig(config, authType);

    return {
      ...config,
      authType,
      name: config.name || config.id,
      baseUrl: config.baseUrl || this.getDefaultBaseUrl(authType),
      generationConfig: config.generationConfig ?? {},
      capabilities: config.capabilities || {},
    };
  }

  /**
   * Validate model configuration
   */
  private validateModelConfig(config: ModelConfig, authType: AuthType): void {
    if (!config.id) {
      throw new Error(
        `Model config in authType '${authType}' missing required field: id`,
      );
    }
  }
}
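Read together, the resolution rules above are easiest to see in a small usage sketch. The snippet below is illustrative only: the model IDs and the `DEEPSEEK_API_KEY` variable are made up, and the import paths assume a caller living next to the files in this change.

```ts
import { AuthType } from '../core/contentGenerator.js';
import { ModelRegistry } from './modelRegistry.js';
import type { ModelProvidersConfig } from './types.js';

// Hypothetical provider list; only `id` is required by the registry itself.
const providers: ModelProvidersConfig = {
  openai: [
    { id: 'gpt-4o', name: 'GPT-4o', envKey: 'OPENAI_API_KEY' },
    {
      id: 'deepseek-chat',
      envKey: 'DEEPSEEK_API_KEY',
      baseUrl: 'https://api.deepseek.com/v1',
    },
  ],
};

const registry = new ModelRegistry(providers);

// Qwen OAuth models are always registered, regardless of the config above.
const qwenModels = registry.getModelsForAuthType(AuthType.QWEN_OAUTH);

// 'gpt-4o' gets the default OpenAI base URL; 'deepseek-chat' keeps its own
// baseUrl and falls back to its id as the display name.
const gpt = registry.getModel(AuthType.USE_OPENAI, 'gpt-4o');
const deepseek = registry.getModel(AuthType.USE_OPENAI, 'deepseek-chat');
console.log(qwenModels.length > 0, gpt?.baseUrl, deepseek?.name);
```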
599  packages/core/src/models/modelsConfig.test.ts  Normal file
@@ -0,0 +1,599 @@
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import { describe, it, expect } from 'vitest';
|
||||
import { ModelsConfig } from './modelsConfig.js';
|
||||
import { AuthType } from '../core/contentGenerator.js';
|
||||
import type { ContentGeneratorConfig } from '../core/contentGenerator.js';
|
||||
import type { ModelProvidersConfig } from './types.js';
|
||||
|
||||
describe('ModelsConfig', () => {
|
||||
function deepClone<T>(value: T): T {
|
||||
if (value === null || typeof value !== 'object') return value;
|
||||
if (Array.isArray(value)) return value.map((v) => deepClone(v)) as T;
|
||||
const out: Record<string, unknown> = {};
|
||||
for (const key of Object.keys(value as Record<string, unknown>)) {
|
||||
out[key] = deepClone((value as Record<string, unknown>)[key]);
|
||||
}
|
||||
return out as T;
|
||||
}
|
||||
|
||||
function snapshotGenerationConfig(
|
||||
modelsConfig: ModelsConfig,
|
||||
): ContentGeneratorConfig {
|
||||
return deepClone<ContentGeneratorConfig>(
|
||||
modelsConfig.getGenerationConfig() as ContentGeneratorConfig,
|
||||
);
|
||||
}
|
||||
|
||||
function currentGenerationConfig(
|
||||
modelsConfig: ModelsConfig,
|
||||
): ContentGeneratorConfig {
|
||||
return modelsConfig.getGenerationConfig() as ContentGeneratorConfig;
|
||||
}
|
||||
|
||||
it('should fully rollback state when switchModel fails after applying defaults (authType change)', async () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'openai-a',
|
||||
name: 'OpenAI A',
|
||||
baseUrl: 'https://api.openai.example.com/v1',
|
||||
envKey: 'OPENAI_API_KEY',
|
||||
generationConfig: {
|
||||
samplingParams: { temperature: 0.2, max_tokens: 123 },
|
||||
timeout: 111,
|
||||
maxRetries: 1,
|
||||
},
|
||||
},
|
||||
],
|
||||
anthropic: [
|
||||
{
|
||||
id: 'anthropic-b',
|
||||
name: 'Anthropic B',
|
||||
baseUrl: 'https://api.anthropic.example.com/v1',
|
||||
envKey: 'ANTHROPIC_API_KEY',
|
||||
generationConfig: {
|
||||
samplingParams: { temperature: 0.7, max_tokens: 456 },
|
||||
timeout: 222,
|
||||
maxRetries: 2,
|
||||
},
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
});
|
||||
|
||||
// Establish a known baseline state via a successful switch.
|
||||
await modelsConfig.switchModel(AuthType.USE_OPENAI, 'openai-a');
|
||||
const baselineAuthType = modelsConfig.getCurrentAuthType();
|
||||
const baselineModel = modelsConfig.getModel();
|
||||
const baselineStrict = modelsConfig.isStrictModelProviderSelection();
|
||||
const baselineGc = snapshotGenerationConfig(modelsConfig);
|
||||
const baselineSources = deepClone(
|
||||
modelsConfig.getGenerationConfigSources(),
|
||||
);
|
||||
|
||||
modelsConfig.setOnModelChange(async () => {
|
||||
throw new Error('refresh failed');
|
||||
});
|
||||
|
||||
await expect(
|
||||
modelsConfig.switchModel(AuthType.USE_ANTHROPIC, 'anthropic-b'),
|
||||
).rejects.toThrow('refresh failed');
|
||||
|
||||
// Ensure state is fully rolled back (selection + generation config + flags).
|
||||
expect(modelsConfig.getCurrentAuthType()).toBe(baselineAuthType);
|
||||
expect(modelsConfig.getModel()).toBe(baselineModel);
|
||||
expect(modelsConfig.isStrictModelProviderSelection()).toBe(baselineStrict);
|
||||
|
||||
const gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc).toMatchObject({
|
||||
model: baselineGc.model,
|
||||
baseUrl: baselineGc.baseUrl,
|
||||
apiKeyEnvKey: baselineGc.apiKeyEnvKey,
|
||||
samplingParams: baselineGc.samplingParams,
|
||||
timeout: baselineGc.timeout,
|
||||
maxRetries: baselineGc.maxRetries,
|
||||
});
|
||||
|
||||
const sources = modelsConfig.getGenerationConfigSources();
|
||||
expect(sources).toEqual(baselineSources);
|
||||
});
|
||||
|
||||
it('should fully rollback state when switchModel fails after applying defaults', async () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'model-a',
|
||||
name: 'Model A',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_A',
|
||||
},
|
||||
{
|
||||
id: 'model-b',
|
||||
name: 'Model B',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_B',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
});
|
||||
|
||||
await modelsConfig.switchModel(AuthType.USE_OPENAI, 'model-a');
|
||||
const baselineModel = modelsConfig.getModel();
|
||||
const baselineGc = snapshotGenerationConfig(modelsConfig);
|
||||
const baselineSources = deepClone(
|
||||
modelsConfig.getGenerationConfigSources(),
|
||||
);
|
||||
|
||||
modelsConfig.setOnModelChange(async () => {
|
||||
throw new Error('hot-update failed');
|
||||
});
|
||||
|
||||
await expect(
|
||||
modelsConfig.switchModel(AuthType.USE_OPENAI, 'model-b'),
|
||||
).rejects.toThrow('hot-update failed');
|
||||
|
||||
expect(modelsConfig.getModel()).toBe(baselineModel);
|
||||
expect(modelsConfig.getGenerationConfig()).toMatchObject({
|
||||
model: baselineGc.model,
|
||||
baseUrl: baselineGc.baseUrl,
|
||||
apiKeyEnvKey: baselineGc.apiKeyEnvKey,
|
||||
});
|
||||
expect(modelsConfig.getGenerationConfigSources()).toEqual(baselineSources);
|
||||
});
|
||||
|
||||
it('should require provider-sourced apiKey when switching models even if envKey is missing', async () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'model-a',
|
||||
name: 'Model A',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_SHARED',
|
||||
},
|
||||
{
|
||||
id: 'model-b',
|
||||
name: 'Model B',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_SHARED',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {
|
||||
model: 'model-a',
|
||||
},
|
||||
});
|
||||
|
||||
// Simulate key prompt flow / explicit key provided via CLI/settings.
|
||||
modelsConfig.updateCredentials({ apiKey: 'manual-key', model: 'model-a' });
|
||||
|
||||
await modelsConfig.switchModel(AuthType.USE_OPENAI, 'model-b');
|
||||
|
||||
const gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.model).toBe('model-b');
|
||||
expect(gc.apiKey).toBeUndefined();
|
||||
expect(gc.apiKeyEnvKey).toBe('API_KEY_SHARED');
|
||||
});
|
||||
|
||||
it('should preserve settings generationConfig when model is updated via updateCredentials even if it matches modelProviders', () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'model-a',
|
||||
name: 'Model A',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_A',
|
||||
generationConfig: {
|
||||
samplingParams: { temperature: 0.1, max_tokens: 123 },
|
||||
timeout: 111,
|
||||
maxRetries: 1,
|
||||
},
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
// Simulate settings.model.generationConfig being resolved into ModelsConfig.generationConfig
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {
|
||||
model: 'model-a',
|
||||
samplingParams: { temperature: 0.9, max_tokens: 999 },
|
||||
timeout: 9999,
|
||||
maxRetries: 9,
|
||||
},
|
||||
generationConfigSources: {
|
||||
model: { kind: 'settings', detail: 'settings.model.name' },
|
||||
samplingParams: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.samplingParams',
|
||||
},
|
||||
timeout: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.timeout',
|
||||
},
|
||||
maxRetries: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.maxRetries',
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
// User manually updates the model via updateCredentials (e.g. key prompt flow).
|
||||
// Even if the model ID matches a modelProviders entry, we must not apply provider defaults
|
||||
// that would overwrite settings.model.generationConfig.
|
||||
modelsConfig.updateCredentials({ model: 'model-a' });
|
||||
|
||||
modelsConfig.syncAfterAuthRefresh(
|
||||
AuthType.USE_OPENAI,
|
||||
modelsConfig.getModel(),
|
||||
);
|
||||
|
||||
const gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.model).toBe('model-a');
|
||||
expect(gc.samplingParams?.temperature).toBe(0.9);
|
||||
expect(gc.samplingParams?.max_tokens).toBe(999);
|
||||
expect(gc.timeout).toBe(9999);
|
||||
expect(gc.maxRetries).toBe(9);
|
||||
});
|
||||
|
||||
it('should preserve settings generationConfig across multiple auth refreshes after updateCredentials', () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'model-a',
|
||||
name: 'Model A',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_A',
|
||||
generationConfig: {
|
||||
samplingParams: { temperature: 0.1, max_tokens: 123 },
|
||||
timeout: 111,
|
||||
maxRetries: 1,
|
||||
},
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {
|
||||
model: 'model-a',
|
||||
samplingParams: { temperature: 0.9, max_tokens: 999 },
|
||||
timeout: 9999,
|
||||
maxRetries: 9,
|
||||
},
|
||||
generationConfigSources: {
|
||||
model: { kind: 'settings', detail: 'settings.model.name' },
|
||||
samplingParams: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.samplingParams',
|
||||
},
|
||||
timeout: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.timeout',
|
||||
},
|
||||
maxRetries: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.maxRetries',
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
modelsConfig.updateCredentials({
|
||||
apiKey: 'manual-key',
|
||||
baseUrl: 'https://manual.example.com/v1',
|
||||
model: 'model-a',
|
||||
});
|
||||
|
||||
// First auth refresh
|
||||
modelsConfig.syncAfterAuthRefresh(
|
||||
AuthType.USE_OPENAI,
|
||||
modelsConfig.getModel(),
|
||||
);
|
||||
// Second auth refresh should still preserve settings generationConfig
|
||||
modelsConfig.syncAfterAuthRefresh(
|
||||
AuthType.USE_OPENAI,
|
||||
modelsConfig.getModel(),
|
||||
);
|
||||
|
||||
const gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.model).toBe('model-a');
|
||||
expect(gc.samplingParams?.temperature).toBe(0.9);
|
||||
expect(gc.samplingParams?.max_tokens).toBe(999);
|
||||
expect(gc.timeout).toBe(9999);
|
||||
expect(gc.maxRetries).toBe(9);
|
||||
});
|
||||
|
||||
it('should clear provider-sourced config when updateCredentials is called after switchModel', async () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'provider-model',
|
||||
name: 'Provider Model',
|
||||
baseUrl: 'https://provider.example.com/v1',
|
||||
envKey: 'PROVIDER_API_KEY',
|
||||
generationConfig: {
|
||||
samplingParams: { temperature: 0.1, max_tokens: 100 },
|
||||
timeout: 1000,
|
||||
maxRetries: 2,
|
||||
},
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
});
|
||||
|
||||
// Step 1: Switch to a provider model - this applies provider config
|
||||
await modelsConfig.switchModel(AuthType.USE_OPENAI, 'provider-model');
|
||||
|
||||
// Verify provider config is applied
|
||||
let gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.model).toBe('provider-model');
|
||||
expect(gc.baseUrl).toBe('https://provider.example.com/v1');
|
||||
expect(gc.samplingParams?.temperature).toBe(0.1);
|
||||
expect(gc.samplingParams?.max_tokens).toBe(100);
|
||||
expect(gc.timeout).toBe(1000);
|
||||
expect(gc.maxRetries).toBe(2);
|
||||
|
||||
// Verify sources are from modelProviders
|
||||
let sources = modelsConfig.getGenerationConfigSources();
|
||||
expect(sources['model']?.kind).toBe('modelProviders');
|
||||
expect(sources['baseUrl']?.kind).toBe('modelProviders');
|
||||
expect(sources['samplingParams']?.kind).toBe('modelProviders');
|
||||
expect(sources['timeout']?.kind).toBe('modelProviders');
|
||||
expect(sources['maxRetries']?.kind).toBe('modelProviders');
|
||||
|
||||
// Step 2: User manually sets credentials via updateCredentials
|
||||
// This should clear all provider-sourced config
|
||||
modelsConfig.updateCredentials({
|
||||
apiKey: 'manual-api-key',
|
||||
model: 'custom-model',
|
||||
});
|
||||
|
||||
// Verify provider-sourced config is cleared
|
||||
gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.model).toBe('custom-model'); // Set by updateCredentials
|
||||
expect(gc.apiKey).toBe('manual-api-key'); // Set by updateCredentials
|
||||
expect(gc.baseUrl).toBeUndefined(); // Cleared (was from provider)
|
||||
expect(gc.samplingParams).toBeUndefined(); // Cleared (was from provider)
|
||||
expect(gc.timeout).toBeUndefined(); // Cleared (was from provider)
|
||||
expect(gc.maxRetries).toBeUndefined(); // Cleared (was from provider)
|
||||
|
||||
// Verify sources are updated
|
||||
sources = modelsConfig.getGenerationConfigSources();
|
||||
expect(sources['model']?.kind).toBe('programmatic');
|
||||
expect(sources['apiKey']?.kind).toBe('programmatic');
|
||||
expect(sources['baseUrl']).toBeUndefined(); // Source cleared
|
||||
expect(sources['samplingParams']).toBeUndefined(); // Source cleared
|
||||
expect(sources['timeout']).toBeUndefined(); // Source cleared
|
||||
expect(sources['maxRetries']).toBeUndefined(); // Source cleared
|
||||
});
|
||||
|
||||
it('should preserve non-provider config when updateCredentials clears provider config', async () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'provider-model',
|
||||
name: 'Provider Model',
|
||||
baseUrl: 'https://provider.example.com/v1',
|
||||
envKey: 'PROVIDER_API_KEY',
|
||||
generationConfig: {
|
||||
samplingParams: { temperature: 0.1, max_tokens: 100 },
|
||||
timeout: 1000,
|
||||
maxRetries: 2,
|
||||
},
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
// Initialize with settings-sourced config
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {
|
||||
samplingParams: { temperature: 0.8, max_tokens: 500 },
|
||||
timeout: 5000,
|
||||
},
|
||||
generationConfigSources: {
|
||||
samplingParams: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.samplingParams',
|
||||
},
|
||||
timeout: {
|
||||
kind: 'settings',
|
||||
detail: 'settings.model.generationConfig.timeout',
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
// Switch to provider model - this overwrites with provider config
|
||||
await modelsConfig.switchModel(AuthType.USE_OPENAI, 'provider-model');
|
||||
|
||||
// Verify provider config is applied (overwriting settings)
|
||||
let gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.samplingParams?.temperature).toBe(0.1);
|
||||
expect(gc.timeout).toBe(1000);
|
||||
|
||||
// User manually sets credentials - clears provider-sourced config
|
||||
modelsConfig.updateCredentials({
|
||||
apiKey: 'manual-key',
|
||||
});
|
||||
|
||||
// Provider-sourced config should be cleared
|
||||
gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.samplingParams).toBeUndefined();
|
||||
expect(gc.timeout).toBeUndefined();
|
||||
// The original settings-sourced config is NOT restored automatically;
|
||||
// it should be re-resolved by other layers in refreshAuth
|
||||
});
|
||||
|
||||
it('should always force Qwen OAuth apiKey placeholder when applying model defaults', async () => {
|
||||
// Simulate a stale/explicit apiKey existing before switching models.
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.QWEN_OAUTH,
|
||||
generationConfig: {
|
||||
apiKey: 'manual-key-should-not-leak',
|
||||
},
|
||||
});
|
||||
|
||||
// Switching within qwen-oauth triggers applyResolvedModelDefaults().
|
||||
await modelsConfig.switchModel(AuthType.QWEN_OAUTH, 'vision-model');
|
||||
|
||||
const gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.apiKey).toBe('QWEN_OAUTH_DYNAMIC_TOKEN');
|
||||
expect(gc.apiKeyEnvKey).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should apply Qwen OAuth apiKey placeholder during syncAfterAuthRefresh for fresh users', () => {
|
||||
// Fresh user: authType not selected yet (currentAuthType undefined).
|
||||
const modelsConfig = new ModelsConfig();
|
||||
|
||||
// Config.refreshAuth passes modelId from modelsConfig.getModel(), which falls back to DEFAULT_QWEN_MODEL.
|
||||
modelsConfig.syncAfterAuthRefresh(
|
||||
AuthType.QWEN_OAUTH,
|
||||
modelsConfig.getModel(),
|
||||
);
|
||||
|
||||
const gc = currentGenerationConfig(modelsConfig);
|
||||
expect(gc.model).toBe('coder-model');
|
||||
expect(gc.apiKey).toBe('QWEN_OAUTH_DYNAMIC_TOKEN');
|
||||
expect(gc.apiKeyEnvKey).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should maintain consistency between currentModelId and _generationConfig.model after initialization', () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'test-model',
|
||||
name: 'Test Model',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'TEST_API_KEY',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
// Test case 1: generationConfig.model provided with other config
|
||||
const config1 = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {
|
||||
model: 'test-model',
|
||||
samplingParams: { temperature: 0.5 },
|
||||
},
|
||||
});
|
||||
expect(config1.getModel()).toBe('test-model');
|
||||
expect(config1.getGenerationConfig().model).toBe('test-model');
|
||||
|
||||
// Test case 2: generationConfig.model provided
|
||||
const config2 = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {
|
||||
model: 'test-model',
|
||||
},
|
||||
});
|
||||
expect(config2.getModel()).toBe('test-model');
|
||||
expect(config2.getGenerationConfig().model).toBe('test-model');
|
||||
|
||||
// Test case 3: no model provided (empty string fallback)
|
||||
const config3 = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {},
|
||||
});
|
||||
expect(config3.getModel()).toBe('coder-model'); // Falls back to DEFAULT_QWEN_MODEL
|
||||
expect(config3.getGenerationConfig().model).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should maintain consistency between currentModelId and _generationConfig.model during syncAfterAuthRefresh', () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'model-a',
|
||||
name: 'Model A',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_A',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
generationConfig: {
|
||||
model: 'model-a',
|
||||
},
|
||||
});
|
||||
|
||||
// Manually set credentials to trigger preserveManualCredentials path
|
||||
modelsConfig.updateCredentials({ apiKey: 'manual-key' });
|
||||
|
||||
// syncAfterAuthRefresh with a different modelId
|
||||
modelsConfig.syncAfterAuthRefresh(AuthType.USE_OPENAI, 'model-a');
|
||||
|
||||
// Both should be consistent
|
||||
expect(modelsConfig.getModel()).toBe('model-a');
|
||||
expect(modelsConfig.getGenerationConfig().model).toBe('model-a');
|
||||
});
|
||||
|
||||
it('should maintain consistency between currentModelId and _generationConfig.model during setModel', async () => {
|
||||
const modelProvidersConfig: ModelProvidersConfig = {
|
||||
openai: [
|
||||
{
|
||||
id: 'model-a',
|
||||
name: 'Model A',
|
||||
baseUrl: 'https://api.example.com/v1',
|
||||
envKey: 'API_KEY_A',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
modelProvidersConfig,
|
||||
});
|
||||
|
||||
// setModel with a raw model ID
|
||||
await modelsConfig.setModel('custom-model');
|
||||
|
||||
// Both should be consistent
|
||||
expect(modelsConfig.getModel()).toBe('custom-model');
|
||||
expect(modelsConfig.getGenerationConfig().model).toBe('custom-model');
|
||||
});
|
||||
|
||||
it('should maintain consistency between currentModelId and _generationConfig.model during updateCredentials', () => {
|
||||
const modelsConfig = new ModelsConfig({
|
||||
initialAuthType: AuthType.USE_OPENAI,
|
||||
});
|
||||
|
||||
// updateCredentials with model
|
||||
modelsConfig.updateCredentials({
|
||||
apiKey: 'test-key',
|
||||
model: 'updated-model',
|
||||
});
|
||||
|
||||
// Both should be consistent
|
||||
expect(modelsConfig.getModel()).toBe('updated-model');
|
||||
expect(modelsConfig.getGenerationConfig().model).toBe('updated-model');
|
||||
});
|
||||
});
|
||||
634  packages/core/src/models/modelsConfig.ts  Normal file
@@ -0,0 +1,634 @@
/**
|
||||
* @license
|
||||
* Copyright 2025 Qwen Team
|
||||
* SPDX-License-Identifier: Apache-2.0
|
||||
*/
|
||||
|
||||
import process from 'node:process';
|
||||
|
||||
import { AuthType } from '../core/contentGenerator.js';
|
||||
import type { ContentGeneratorConfig } from '../core/contentGenerator.js';
|
||||
import type { ContentGeneratorConfigSources } from '../core/contentGenerator.js';
|
||||
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
|
||||
|
||||
import { ModelRegistry } from './modelRegistry.js';
|
||||
import {
|
||||
type ModelProvidersConfig,
|
||||
type ResolvedModelConfig,
|
||||
type AvailableModel,
|
||||
type ModelSwitchMetadata,
|
||||
} from './types.js';
|
||||
import {
|
||||
MODEL_GENERATION_CONFIG_FIELDS,
|
||||
CREDENTIAL_FIELDS,
|
||||
PROVIDER_SOURCED_FIELDS,
|
||||
} from './constants.js';
|
||||
|
||||
export {
|
||||
MODEL_GENERATION_CONFIG_FIELDS,
|
||||
CREDENTIAL_FIELDS,
|
||||
PROVIDER_SOURCED_FIELDS,
|
||||
};
|
||||
|
||||
/**
|
||||
* Callback for when the model changes.
|
||||
* Used by Config to refresh auth/ContentGenerator when needed.
|
||||
*/
|
||||
export type OnModelChangeCallback = (
|
||||
authType: AuthType,
|
||||
requiresRefresh: boolean,
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Options for creating ModelsConfig
|
||||
*/
|
||||
export interface ModelsConfigOptions {
|
||||
/** Initial authType from settings */
|
||||
initialAuthType?: AuthType;
|
||||
/** Model providers configuration */
|
||||
modelProvidersConfig?: ModelProvidersConfig;
|
||||
/** Generation config from CLI/settings */
|
||||
generationConfig?: Partial<ContentGeneratorConfig>;
|
||||
/** Source tracking for generation config */
|
||||
generationConfigSources?: ContentGeneratorConfigSources;
|
||||
/** Callback when model changes require refresh */
|
||||
onModelChange?: OnModelChangeCallback;
|
||||
}
|
||||
|
||||
/**
|
||||
* ModelsConfig manages all model selection logic and state.
|
||||
*
|
||||
* This class encapsulates:
|
||||
* - ModelRegistry for model configuration storage
|
||||
* - Current authType and modelId selection
|
||||
* - Generation config management
|
||||
* - Model switching logic
|
||||
*
|
||||
* Config uses this as a thin entry point for all model-related operations.
|
||||
*/
|
||||
export class ModelsConfig {
|
||||
private readonly modelRegistry: ModelRegistry;
|
||||
|
||||
// Current selection state
|
||||
private currentAuthType: AuthType | undefined;
|
||||
|
||||
// Generation config state
|
||||
private _generationConfig: Partial<ContentGeneratorConfig>;
|
||||
private generationConfigSources: ContentGeneratorConfigSources;
|
||||
|
||||
// Flag for strict model provider selection
|
||||
private strictModelProviderSelection: boolean = false;
|
||||
|
||||
// One-shot flag for qwen-oauth credential caching
|
||||
private requireCachedQwenCredentialsOnce: boolean = false;
|
||||
|
||||
// One-shot flag indicating credentials were manually set via updateCredentials()
|
||||
// When true, syncAfterAuthRefresh should NOT override these credentials with
|
||||
// modelProviders defaults (even if the model ID matches a registry entry).
|
||||
//
|
||||
// This must be persistent across auth refreshes, because refreshAuth() can be
|
||||
// triggered multiple times after a credential prompt flow. We only clear this
|
||||
// flag when we explicitly apply modelProvider defaults (i.e. when the user
|
||||
// switches to a registry model via switchModel).
|
||||
private hasManualCredentials: boolean = false;
|
||||
|
||||
// Callback for notifying Config of model changes
|
||||
private onModelChange?: OnModelChangeCallback;
|
||||
|
||||
// Flag indicating whether authType was explicitly provided (not defaulted)
|
||||
private readonly authTypeWasExplicitlyProvided: boolean;
|
||||
|
||||
private static deepClone<T>(value: T): T {
|
||||
if (value === null || typeof value !== 'object') {
|
||||
return value;
|
||||
}
|
||||
if (Array.isArray(value)) {
|
||||
return value.map((v) => ModelsConfig.deepClone(v)) as T;
|
||||
}
|
||||
const out: Record<string, unknown> = {};
|
||||
for (const key of Object.keys(value as Record<string, unknown>)) {
|
||||
out[key] = ModelsConfig.deepClone(
|
||||
(value as Record<string, unknown>)[key],
|
||||
);
|
||||
}
|
||||
return out as T;
|
||||
}
|
||||
|
||||
private snapshotState(): {
|
||||
currentAuthType: AuthType | undefined;
|
||||
generationConfig: Partial<ContentGeneratorConfig>;
|
||||
generationConfigSources: ContentGeneratorConfigSources;
|
||||
strictModelProviderSelection: boolean;
|
||||
requireCachedQwenCredentialsOnce: boolean;
|
||||
hasManualCredentials: boolean;
|
||||
} {
|
||||
return {
|
||||
currentAuthType: this.currentAuthType,
|
||||
generationConfig: ModelsConfig.deepClone(this._generationConfig),
|
||||
generationConfigSources: ModelsConfig.deepClone(
|
||||
this.generationConfigSources,
|
||||
),
|
||||
strictModelProviderSelection: this.strictModelProviderSelection,
|
||||
requireCachedQwenCredentialsOnce: this.requireCachedQwenCredentialsOnce,
|
||||
hasManualCredentials: this.hasManualCredentials,
|
||||
};
|
||||
}
|
||||
|
||||
private restoreState(
|
||||
snapshot: ReturnType<ModelsConfig['snapshotState']>,
|
||||
): void {
|
||||
this.currentAuthType = snapshot.currentAuthType;
|
||||
this._generationConfig = snapshot.generationConfig;
|
||||
this.generationConfigSources = snapshot.generationConfigSources;
|
||||
this.strictModelProviderSelection = snapshot.strictModelProviderSelection;
|
||||
this.requireCachedQwenCredentialsOnce =
|
||||
snapshot.requireCachedQwenCredentialsOnce;
|
||||
this.hasManualCredentials = snapshot.hasManualCredentials;
|
||||
}
|
||||
|
||||
constructor(options: ModelsConfigOptions = {}) {
|
||||
this.modelRegistry = new ModelRegistry(options.modelProvidersConfig);
|
||||
this.onModelChange = options.onModelChange;
|
||||
|
||||
// Initialize generation config
|
||||
// Note: generationConfig.model should already be fully resolved by ModelConfigResolver
|
||||
// before ModelsConfig is instantiated, so we use it as the single source of truth
|
||||
this._generationConfig = {
|
||||
...(options.generationConfig || {}),
|
||||
};
|
||||
this.generationConfigSources = options.generationConfigSources || {};
|
||||
|
||||
// Track if authType was explicitly provided
|
||||
this.authTypeWasExplicitlyProvided = options.initialAuthType !== undefined;
|
||||
|
||||
// Initialize selection state
|
||||
this.currentAuthType = options.initialAuthType;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current model ID
|
||||
*/
|
||||
getModel(): string {
|
||||
return this._generationConfig.model || DEFAULT_QWEN_MODEL;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get current authType
|
||||
*/
|
||||
getCurrentAuthType(): AuthType | undefined {
|
||||
return this.currentAuthType;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if authType was explicitly provided (via CLI or settings).
|
||||
* If false, no authType was provided yet (fresh user).
|
||||
*/
|
||||
wasAuthTypeExplicitlyProvided(): boolean {
|
||||
return this.authTypeWasExplicitlyProvided;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get available models for current authType
|
||||
*/
|
||||
getAvailableModels(): AvailableModel[] {
|
||||
return this.currentAuthType
|
||||
? this.modelRegistry.getModelsForAuthType(this.currentAuthType)
|
||||
: [];
|
||||
}
|
||||
|
||||
/**
|
||||
* Get available models for a specific authType
|
||||
*/
|
||||
getAvailableModelsForAuthType(authType: AuthType): AvailableModel[] {
|
||||
return this.modelRegistry.getModelsForAuthType(authType);
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a model exists for the given authType
|
||||
*/
|
||||
hasModel(authType: AuthType, modelId: string): boolean {
|
||||
return this.modelRegistry.hasModel(authType, modelId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Set model programmatically (e.g., VLM auto-switch, fallback).
|
||||
* Supports both registry models and raw model IDs.
|
||||
*/
|
||||
async setModel(
|
||||
newModel: string,
|
||||
metadata?: ModelSwitchMetadata,
|
||||
): Promise<void> {
|
||||
// Special case: qwen-oauth VLM auto-switch - hot update in place
|
||||
if (
|
||||
this.currentAuthType === AuthType.QWEN_OAUTH &&
|
||||
(newModel === DEFAULT_QWEN_MODEL || newModel === 'vision-model')
|
||||
) {
|
||||
this.strictModelProviderSelection = false;
|
||||
this._generationConfig.model = newModel;
|
||||
this.generationConfigSources['model'] = {
|
||||
kind: 'programmatic',
|
||||
detail: metadata?.reason || 'setModel',
|
||||
};
|
||||
return;
|
||||
}
|
||||
|
||||
// If model exists in registry, use full switch logic
|
||||
if (
|
||||
this.currentAuthType &&
|
||||
this.modelRegistry.hasModel(this.currentAuthType, newModel)
|
||||
) {
|
||||
await this.switchModel(this.currentAuthType, newModel);
|
||||
return;
|
||||
}
|
||||
|
||||
// Raw model override: update generation config in-place
|
||||
this.strictModelProviderSelection = false;
|
||||
this._generationConfig.model = newModel;
|
||||
this.generationConfigSources['model'] = {
|
||||
kind: 'programmatic',
|
||||
detail: metadata?.reason || 'setModel',
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Switch model (and optionally authType) via registry-backed selection.
|
||||
* This is a superset of the previous split APIs for model-only vs authType+model switching.
|
||||
*/
|
||||
async switchModel(
|
||||
authType: AuthType,
|
||||
modelId: string,
|
||||
options?: { requireCachedCredentials?: boolean },
|
||||
_metadata?: ModelSwitchMetadata,
|
||||
): Promise<void> {
|
||||
const snapshot = this.snapshotState();
|
||||
if (authType === AuthType.QWEN_OAUTH && options?.requireCachedCredentials) {
|
||||
this.requireCachedQwenCredentialsOnce = true;
|
||||
}
|
||||
|
||||
try {
|
||||
const isAuthTypeChange = authType !== this.currentAuthType;
|
||||
this.currentAuthType = authType;
|
||||
|
||||
const model = this.modelRegistry.getModel(authType, modelId);
|
||||
if (!model) {
|
||||
throw new Error(
|
||||
`Model '${modelId}' not found for authType '${authType}'`,
|
||||
);
|
||||
}
|
||||
|
||||
// Apply model defaults
|
||||
this.applyResolvedModelDefaults(model);
|
||||
|
||||
const requiresRefresh = isAuthTypeChange
|
||||
? true
|
||||
: this.checkRequiresRefresh(snapshot.generationConfig.model || '');
|
||||
|
||||
if (this.onModelChange) {
|
||||
await this.onModelChange(authType, requiresRefresh);
|
||||
}
|
||||
} catch (error) {
|
||||
// Rollback on error
|
||||
this.restoreState(snapshot);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get generation config for ContentGenerator creation
|
||||
*/
|
||||
getGenerationConfig(): Partial<ContentGeneratorConfig> {
|
||||
return this._generationConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get generation config sources for debugging/UI
|
||||
*/
|
||||
getGenerationConfigSources(): ContentGeneratorConfigSources {
|
||||
return this.generationConfigSources;
|
||||
}
|
||||
|
||||
/**
|
||||
* Update credentials in generation config.
|
||||
* Sets a flag to prevent syncAfterAuthRefresh from overriding these credentials.
|
||||
*
|
||||
* When credentials are manually set, we clear all provider-sourced configuration
|
||||
* to maintain provider atomicity (either fully applied or not at all).
|
||||
* Other layers (CLI, env, settings, defaults) will participate in resolve.
|
||||
*/
|
||||
updateCredentials(credentials: {
|
||||
apiKey?: string;
|
||||
baseUrl?: string;
|
||||
model?: string;
|
||||
}): void {
|
||||
/**
|
||||
* If any fields are updated here, we treat the resulting config as manually overridden
|
||||
* and avoid applying modelProvider defaults during the next auth refresh.
|
||||
*
|
||||
* Clear all provider-sourced configuration to maintain provider atomicity.
|
||||
* This ensures that when user manually sets credentials, the provider config
|
||||
* is either fully applied (via switchModel) or not at all.
|
||||
*/
|
||||
if (credentials.apiKey || credentials.baseUrl || credentials.model) {
|
||||
this.hasManualCredentials = true;
|
||||
this.clearProviderSourcedConfig();
|
||||
}
|
||||
|
||||
if (credentials.apiKey) {
|
||||
this._generationConfig.apiKey = credentials.apiKey;
|
||||
this.generationConfigSources['apiKey'] = {
|
||||
kind: 'programmatic',
|
||||
detail: 'updateCredentials',
|
||||
};
|
||||
}
|
||||
if (credentials.baseUrl) {
|
||||
this._generationConfig.baseUrl = credentials.baseUrl;
|
||||
this.generationConfigSources['baseUrl'] = {
|
||||
kind: 'programmatic',
|
||||
detail: 'updateCredentials',
|
||||
};
|
||||
}
|
||||
if (credentials.model) {
|
||||
this._generationConfig.model = credentials.model;
|
||||
this.generationConfigSources['model'] = {
|
||||
kind: 'programmatic',
|
||||
detail: 'updateCredentials',
|
||||
};
|
||||
}
|
||||
// When credentials are manually set, disable strict model provider selection
|
||||
// so validation doesn't require envKey-based credentials
|
||||
this.strictModelProviderSelection = false;
|
||||
// Clear apiKeyEnvKey to prevent validation from requiring environment variable
|
||||
this._generationConfig.apiKeyEnvKey = undefined;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear configuration fields that were sourced from modelProviders.
|
||||
* This ensures provider config atomicity when user manually sets credentials.
|
||||
* Other layers (CLI, env, settings, defaults) will participate in resolve.
|
||||
*/
|
||||
private clearProviderSourcedConfig(): void {
|
||||
for (const field of PROVIDER_SOURCED_FIELDS) {
|
||||
const source = this.generationConfigSources[field];
|
||||
if (source?.kind === 'modelProviders') {
|
||||
// Clear the value - let other layers resolve it
|
||||
delete (this._generationConfig as Record<string, unknown>)[field];
|
||||
delete this.generationConfigSources[field];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get whether strict model provider selection is enabled
|
||||
*/
|
||||
isStrictModelProviderSelection(): boolean {
|
||||
return this.strictModelProviderSelection;
|
||||
}
|
||||
|
||||
/**
|
||||
* Reset strict model provider selection flag
|
||||
*/
|
||||
resetStrictModelProviderSelection(): void {
|
||||
this.strictModelProviderSelection = false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check and consume the one-shot cached credentials flag
|
||||
*/
|
||||
consumeRequireCachedCredentialsFlag(): boolean {
|
||||
const value = this.requireCachedQwenCredentialsOnce;
|
||||
this.requireCachedQwenCredentialsOnce = false;
|
||||
return value;
|
||||
}
|
||||
|
||||
/**
|
||||
* Apply resolved model config to generation config
|
||||
*/
|
||||
private applyResolvedModelDefaults(model: ResolvedModelConfig): void {
|
||||
this.strictModelProviderSelection = true;
|
||||
// We're explicitly applying modelProvider defaults now, so manual overrides
|
||||
// should no longer block syncAfterAuthRefresh from applying provider defaults.
|
||||
this.hasManualCredentials = false;
|
||||
|
||||
this._generationConfig.model = model.id;
|
||||
this.generationConfigSources['model'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'model.id',
|
||||
};
|
||||
|
||||
// Clear credentials to avoid reusing previous model's API key
|
||||
|
||||
// For Qwen OAuth, apiKey must always be a placeholder. It will be dynamically
|
||||
// replaced when building requests. Do not preserve any previous key or read
|
||||
// from envKey.
|
||||
//
|
||||
// (OpenAI client instantiation requires an apiKey even though it will be
|
||||
// replaced later.)
|
||||
if (this.currentAuthType === AuthType.QWEN_OAUTH) {
|
||||
this._generationConfig.apiKey = 'QWEN_OAUTH_DYNAMIC_TOKEN';
|
||||
this.generationConfigSources['apiKey'] = {
|
||||
kind: 'computed',
|
||||
detail: 'Qwen OAuth placeholder token',
|
||||
};
|
||||
this._generationConfig.apiKeyEnvKey = undefined;
|
||||
delete this.generationConfigSources['apiKeyEnvKey'];
|
||||
} else {
|
||||
this._generationConfig.apiKey = undefined;
|
||||
this._generationConfig.apiKeyEnvKey = undefined;
|
||||
}
|
||||
|
||||
// Read API key from environment variable if envKey is specified
|
||||
if (model.envKey !== undefined) {
|
||||
const apiKey = process.env[model.envKey];
|
||||
if (apiKey) {
|
||||
this._generationConfig.apiKey = apiKey;
|
||||
this.generationConfigSources['apiKey'] = {
|
||||
kind: 'env',
|
||||
envKey: model.envKey,
|
||||
via: {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'envKey',
|
||||
},
|
||||
};
|
||||
}
|
||||
this._generationConfig.apiKeyEnvKey = model.envKey;
|
||||
this.generationConfigSources['apiKeyEnvKey'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'envKey',
|
||||
};
|
||||
}
|
||||
|
||||
// Base URL
|
||||
this._generationConfig.baseUrl = model.baseUrl;
|
||||
this.generationConfigSources['baseUrl'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'baseUrl',
|
||||
};
|
||||
|
||||
// Generation config
|
||||
const gc = model.generationConfig;
|
||||
this._generationConfig.samplingParams = { ...(gc.samplingParams || {}) };
|
||||
this.generationConfigSources['samplingParams'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'generationConfig.samplingParams',
|
||||
};
|
||||
|
||||
this._generationConfig.timeout = gc.timeout;
|
||||
this.generationConfigSources['timeout'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'generationConfig.timeout',
|
||||
};
|
||||
|
||||
this._generationConfig.maxRetries = gc.maxRetries;
|
||||
this.generationConfigSources['maxRetries'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'generationConfig.maxRetries',
|
||||
};
|
||||
|
||||
this._generationConfig.disableCacheControl = gc.disableCacheControl;
|
||||
this.generationConfigSources['disableCacheControl'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'generationConfig.disableCacheControl',
|
||||
};
|
||||
|
||||
this._generationConfig.schemaCompliance = gc.schemaCompliance;
|
||||
this.generationConfigSources['schemaCompliance'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'generationConfig.schemaCompliance',
|
||||
};
|
||||
|
||||
this._generationConfig.reasoning = gc.reasoning;
|
||||
this.generationConfigSources['reasoning'] = {
|
||||
kind: 'modelProviders',
|
||||
authType: model.authType,
|
||||
modelId: model.id,
|
||||
detail: 'generationConfig.reasoning',
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if model switch requires ContentGenerator refresh.
|
||||
*
|
||||
* Note: This method is ONLY called by switchModel() for same-authType model switches.
|
||||
* Cross-authType switches use switchModel(authType, modelId), which always requires full refresh.
|
||||
*
|
||||
* When this method is called:
|
||||
* - this.currentAuthType is already the target authType
|
||||
* - We're checking if switching between two models within the SAME authType needs refresh
|
||||
*
|
||||
* Examples:
|
||||
* - Qwen OAuth: coder-model -> vision-model (same authType, hot-update safe)
|
||||
* - OpenAI: model-a -> model-b with same envKey (same authType, hot-update safe)
|
||||
* - OpenAI: gpt-4 -> deepseek-chat with different envKey (same authType, needs refresh)
|
||||
*
|
||||
* Cross-authType scenarios:
|
||||
* - OpenAI -> Qwen OAuth: handled by switchModel(authType, modelId), always refreshes
|
||||
* - Qwen OAuth -> OpenAI: handled by switchModel(authType, modelId), always refreshes
|
||||
*/
|
||||
private checkRequiresRefresh(previousModelId: string): boolean {
|
||||
// Defensive: this method is only called after switchModel() sets currentAuthType,
|
||||
// but keep type safety for any future callsites.
|
||||
const authType = this.currentAuthType;
|
||||
if (!authType) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// For Qwen OAuth, model switches within the same authType can always be hot-updated
|
||||
// (coder-model <-> vision-model don't require ContentGenerator recreation)
|
||||
if (authType === AuthType.QWEN_OAUTH) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Get previous and current model configs
|
||||
const previousModel = this.modelRegistry.getModel(
|
||||
authType,
|
||||
previousModelId,
|
||||
);
|
||||
const currentModel = this.modelRegistry.getModel(
|
||||
authType,
|
||||
this._generationConfig.model || '',
|
||||
);
|
||||
|
||||
// If either model is not in registry, require refresh to be safe
|
||||
if (!previousModel || !currentModel) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// Check if critical fields changed that require ContentGenerator recreation
|
||||
const criticalFieldsChanged =
|
||||
previousModel.envKey !== currentModel.envKey ||
|
||||
previousModel.baseUrl !== currentModel.baseUrl;
|
||||
|
||||
if (criticalFieldsChanged) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// For other auth types with strict model provider selection,
|
||||
// if no critical fields changed, we can still hot-update
|
||||
// (e.g., switching between two OpenAI models with same envKey and baseUrl)
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Called by Config.refreshAuth to sync state after auth refresh.
|
||||
*
|
||||
* IMPORTANT: If credentials were manually set via updateCredentials(),
|
||||
* we should NOT override them with modelProvider defaults.
|
||||
* This handles the case where user inputs credentials via OpenAIKeyPrompt
|
||||
* after removing environment variables for a previously selected model.
|
||||
*/
|
||||
syncAfterAuthRefresh(authType: AuthType, modelId?: string): void {
|
||||
// Check if we have manually set credentials that should be preserved
|
||||
const preserveManualCredentials = this.hasManualCredentials;
|
||||
|
||||
// If credentials were manually set, don't apply modelProvider defaults
|
||||
// Just update the authType and preserve the manually set credentials
|
||||
if (preserveManualCredentials) {
|
||||
this.strictModelProviderSelection = false;
|
||||
this.currentAuthType = authType;
|
||||
if (modelId) {
|
||||
this._generationConfig.model = modelId;
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
this.strictModelProviderSelection = false;
|
||||
|
||||
if (modelId && this.modelRegistry.hasModel(authType, modelId)) {
|
||||
const resolved = this.modelRegistry.getModel(authType, modelId);
|
||||
if (resolved) {
|
||||
// Ensure applyResolvedModelDefaults can correctly apply authType-specific
|
||||
// behavior (e.g., Qwen OAuth placeholder token) by setting currentAuthType
|
||||
// before applying defaults.
|
||||
this.currentAuthType = authType;
|
||||
this.applyResolvedModelDefaults(resolved);
|
||||
}
|
||||
} else {
|
||||
this.currentAuthType = authType;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Update callback for model changes
|
||||
*/
|
||||
setOnModelChange(callback: OnModelChangeCallback): void {
|
||||
this.onModelChange = callback;
|
||||
}
|
||||
}
|
||||
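The ModelsConfig class above coordinates selection, switching, and credential state. Here is a minimal sketch of the intended call pattern; the provider entry, callback body, and key value are invented for illustration, and the real wiring lives in `Config`.

```ts
import { AuthType } from '../core/contentGenerator.js';
import { ModelsConfig } from './modelsConfig.js';

async function demo(): Promise<void> {
  const modelsConfig = new ModelsConfig({
    initialAuthType: AuthType.USE_OPENAI,
    modelProvidersConfig: {
      openai: [{ id: 'gpt-4o', envKey: 'OPENAI_API_KEY' }],
    },
    onModelChange: async (_authType, requiresRefresh) => {
      // In the real wiring, Config recreates the ContentGenerator here when
      // requiresRefresh is true; throwing triggers a full state rollback.
      if (requiresRefresh) {
        // await config.refreshAuth(_authType);
      }
    },
  });

  // Registry-backed switch: provider defaults (baseUrl, envKey, generation
  // config) are applied atomically, or rolled back if the callback throws.
  await modelsConfig.switchModel(AuthType.USE_OPENAI, 'gpt-4o');

  // Manually supplied credentials clear provider-sourced fields so a later
  // auth refresh does not overwrite them with provider defaults.
  modelsConfig.updateCredentials({
    apiKey: 'sk-example',
    model: 'my-custom-model',
  });
  console.log(modelsConfig.getGenerationConfig().model); // 'my-custom-model'
}

void demo();
```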
101  packages/core/src/models/types.ts  Normal file
@@ -0,0 +1,101 @@
/**
 * @license
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import type {
  AuthType,
  ContentGeneratorConfig,
} from '../core/contentGenerator.js';

/**
 * Model capabilities configuration
 */
export interface ModelCapabilities {
  /** Supports image/vision inputs */
  vision?: boolean;
}

/**
 * Model-scoped generation configuration.
 *
 * Keep this consistent with {@link ContentGeneratorConfig} so modelProviders can
 * feed directly into content generator resolution without shape conversion.
 */
export type ModelGenerationConfig = Pick<
  ContentGeneratorConfig,
  | 'samplingParams'
  | 'timeout'
  | 'maxRetries'
  | 'disableCacheControl'
  | 'schemaCompliance'
  | 'reasoning'
>;

/**
 * Model configuration for a single model within an authType
 */
export interface ModelConfig {
  /** Unique model ID within authType (e.g., "qwen-coder", "gpt-4-turbo") */
  id: string;
  /** Display name (defaults to id) */
  name?: string;
  /** Model description */
  description?: string;
  /** Environment variable name to read API key from (e.g., "OPENAI_API_KEY") */
  envKey?: string;
  /** API endpoint override */
  baseUrl?: string;
  /** Model capabilities, reserve for future use. Now we do not read this to determine multi-modal support or other capabilities. */
  capabilities?: ModelCapabilities;
  /** Generation configuration (sampling parameters) */
  generationConfig?: ModelGenerationConfig;
}

/**
 * Model providers configuration grouped by authType
 */
export type ModelProvidersConfig = {
  [authType: string]: ModelConfig[];
};

/**
 * Resolved model config with all defaults applied
 */
export interface ResolvedModelConfig extends ModelConfig {
  /** AuthType this model belongs to (always present from map key) */
  authType: AuthType;
  /** Display name (always present, defaults to id) */
  name: string;
  /** Environment variable name to read API key from (optional, provider-specific) */
  envKey?: string;
  /** API base URL (always present, has default per authType) */
  baseUrl: string;
  /** Generation config (always present, merged with defaults) */
  generationConfig: ModelGenerationConfig;
  /** Capabilities (always present, defaults to {}) */
  capabilities: ModelCapabilities;
}

/**
 * Model info for UI display
 */
export interface AvailableModel {
  id: string;
  label: string;
  description?: string;
  capabilities?: ModelCapabilities;
  authType: AuthType;
  isVision?: boolean;
}

/**
 * Metadata for model switch operations
 */
export interface ModelSwitchMetadata {
  /** Reason for the switch */
  reason?: string;
  /** Additional context */
  context?: string;
}
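As a quick illustration of the contract these types encode, the sketch below pairs a minimal `ModelConfig` with the `ResolvedModelConfig` the registry would build from it; the model ID is invented, and the resolved values mirror the registry defaults rather than any shipped configuration.

```ts
import { AuthType } from '../core/contentGenerator.js';
import type { ModelConfig, ResolvedModelConfig } from './types.js';

// A minimal entry: only `id` is required.
const minimal: ModelConfig = { id: 'gpt-4o-mini' };

// What resolution under the openai auth type is expected to produce.
const resolved: ResolvedModelConfig = {
  ...minimal,
  authType: AuthType.USE_OPENAI,
  name: 'gpt-4o-mini', // defaults to id
  baseUrl: 'https://api.openai.com/v1', // per-authType default
  generationConfig: {}, // always present after resolution
  capabilities: {}, // always present after resolution
};

console.log(resolved.name, resolved.baseUrl);
```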
@@ -601,8 +601,17 @@ async function authWithQwenDeviceFlow(
    console.log('Waiting for authorization to complete...\n');
  };

  // If browser launch is not suppressed, try to open the URL
  if (!config.isBrowserLaunchSuppressed()) {
  // Always show the fallback message in non-interactive environments to ensure
  // users can see the authorization URL even if browser launching is attempted.
  // This is critical for headless/remote environments where browser launching
  // may silently fail without throwing an error.
  if (config.isBrowserLaunchSuppressed()) {
    // Browser launch is suppressed, show fallback message
    showFallbackMessage();
  } else {
    // Try to open the URL in browser, but always show the URL as fallback
    // to handle cases where browser launch silently fails (e.g., headless servers)
    showFallbackMessage();
    try {
      const childProcess = await open(deviceAuth.verification_uri_complete);

@@ -611,19 +620,19 @@ async function authWithQwenDeviceFlow(
      // in a minimal Docker container), it will emit an unhandled 'error' event,
      // causing the entire Node.js process to crash.
      if (childProcess) {
        childProcess.on('error', () => {
        childProcess.on('error', (err) => {
          console.debug(
            'Failed to open browser. Visit this URL to authorize:',
            'Browser launch failed:',
            err.message || 'Unknown error',
          );
          showFallbackMessage();
        });
      }
    } catch (_err) {
      showFallbackMessage();
    } catch (err) {
      console.debug(
        'Failed to open browser:',
        err instanceof Error ? err.message : 'Unknown error',
      );
    }
  } else {
    // Browser launch is suppressed, show fallback message
    showFallbackMessage();
  }

  // Emit auth progress event
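Outside the diff, the same guard can be written as a self-contained sketch. This is not the project's helper, only an illustration of the pattern the hunk adds: print the URL up front, then attach an 'error' listener so a failed spawn cannot crash the process. The `open` package and the hypothetical `showFallbackMessage` printer are the only assumptions.

```typescript
import open from 'open';

// Hypothetical fallback printer so headless users can authorize manually.
function showFallbackMessage(url: string): void {
  console.log(`Open this URL in a browser to authorize:\n${url}`);
}

async function launchBrowser(url: string): Promise<void> {
  // Always show the URL: browser launching may silently fail on headless hosts.
  showFallbackMessage(url);
  try {
    const child = await open(url);
    // Without this listener, a spawn failure (e.g., no browser in a minimal
    // container) emits an unhandled 'error' event and crashes the process.
    child?.on('error', (err) => {
      console.debug('Browser launch failed:', err.message || 'Unknown error');
    });
  } catch (err) {
    console.debug(
      'Failed to open browser:',
      err instanceof Error ? err.message : 'Unknown error',
    );
  }
}
```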
@@ -22,10 +22,11 @@ import {
  type Mock,
} from 'vitest';
import { Config, type ConfigParameters } from '../config/config.js';
import { DEFAULT_GEMINI_MODEL } from '../config/models.js';
import { DEFAULT_QWEN_MODEL } from '../config/models.js';
import {
  createContentGenerator,
  createContentGeneratorConfig,
  resolveContentGeneratorConfigWithSources,
  AuthType,
} from '../core/contentGenerator.js';
import { GeminiChat } from '../core/geminiChat.js';
@@ -42,7 +43,33 @@ import type {
import { SubagentTerminateMode } from './types.js';

vi.mock('../core/geminiChat.js');
vi.mock('../core/contentGenerator.js');
vi.mock('../core/contentGenerator.js', async (importOriginal) => {
  const actual =
    await importOriginal<typeof import('../core/contentGenerator.js')>();
  const { DEFAULT_QWEN_MODEL } = await import('../config/models.js');
  return {
    ...actual,
    createContentGenerator: vi.fn().mockResolvedValue({
      generateContent: vi.fn(),
      generateContentStream: vi.fn(),
      countTokens: vi.fn().mockResolvedValue({ totalTokens: 100 }),
      embedContent: vi.fn(),
      useSummarizedThinking: vi.fn().mockReturnValue(false),
    }),
    createContentGeneratorConfig: vi.fn().mockReturnValue({
      model: DEFAULT_QWEN_MODEL,
      authType: actual.AuthType.USE_GEMINI,
    }),
    resolveContentGeneratorConfigWithSources: vi.fn().mockReturnValue({
      config: {
        model: DEFAULT_QWEN_MODEL,
        authType: actual.AuthType.USE_GEMINI,
        apiKey: 'test-api-key',
      },
      sources: {},
    }),
  };
});
vi.mock('../utils/environmentContext.js', () => ({
  getEnvironmentContext: vi.fn().mockResolvedValue([{ text: 'Env Context' }]),
  getInitialChatHistory: vi.fn(async (_config, extraHistory) => [
@@ -65,7 +92,7 @@ async function createMockConfig(
  toolRegistryMocks = {},
): Promise<{ config: Config; toolRegistry: ToolRegistry }> {
  const configParams: ConfigParameters = {
    model: DEFAULT_GEMINI_MODEL,
    model: DEFAULT_QWEN_MODEL,
    targetDir: '.',
    debugMode: false,
    cwd: process.cwd(),
@@ -89,7 +116,7 @@ async function createMockConfig(

  // Mock getContentGeneratorConfig to return a valid config
  vi.spyOn(config, 'getContentGeneratorConfig').mockReturnValue({
    model: DEFAULT_GEMINI_MODEL,
    model: DEFAULT_QWEN_MODEL,
    authType: AuthType.USE_GEMINI,
  });

@@ -192,9 +219,17 @@ describe('subagent.ts', () => {
      // eslint-disable-next-line @typescript-eslint/no-explicit-any
    } as any);
    vi.mocked(createContentGeneratorConfig).mockReturnValue({
      model: DEFAULT_GEMINI_MODEL,
      model: DEFAULT_QWEN_MODEL,
      authType: undefined,
    });
    vi.mocked(resolveContentGeneratorConfigWithSources).mockReturnValue({
      config: {
        model: DEFAULT_QWEN_MODEL,
        authType: AuthType.USE_GEMINI,
        apiKey: 'test-api-key',
      },
      sources: {},
    });

    mockSendMessageStream = vi.fn();
    vi.mocked(GeminiChat).mockImplementation(
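For context on the mock change above: replacing the bare `vi.mock(path)` with a factory that spreads `importOriginal()` keeps real exports such as `AuthType` available while stubbing only the generator functions. A generic, hedged sketch of that vitest pattern (module path and export names here are placeholders):

```typescript
import { vi } from 'vitest';

// Placeholder module path; only the pattern matters.
vi.mock('./some-module.js', async (importOriginal) => {
  const actual = await importOriginal<typeof import('./some-module.js')>();
  return {
    ...actual, // keep real enums, constants, and helpers
    expensiveCall: vi.fn().mockResolvedValue('stubbed'), // override just this export
  };
});
```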
141 packages/core/src/utils/configResolver.test.ts Normal file
@@ -0,0 +1,141 @@
/**
 * @license
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

import { describe, it, expect } from 'vitest';
import {
  resolveField,
  resolveOptionalField,
  layer,
  envLayer,
  cliSource,
  settingsSource,
  defaultSource,
} from './configResolver.js';

describe('configResolver', () => {
  describe('resolveField', () => {
    it('returns first present value from layers', () => {
      const result = resolveField(
        [
          layer(undefined, cliSource('--model')),
          envLayer({ MODEL: 'from-env' }, 'MODEL'),
          layer('from-settings', settingsSource('model.name')),
        ],
        'default-model',
      );

      expect(result.value).toBe('from-env');
      expect(result.source).toEqual({ kind: 'env', envKey: 'MODEL' });
    });

    it('returns default when all layers are undefined', () => {
      const result = resolveField(
        [layer(undefined, cliSource('--model')), envLayer({}, 'MODEL')],
        'default-model',
        defaultSource('default-model'),
      );

      expect(result.value).toBe('default-model');
      expect(result.source).toEqual({
        kind: 'default',
        detail: 'default-model',
      });
    });

    it('respects layer priority order', () => {
      const result = resolveField(
        [
          layer('cli-value', cliSource('--model')),
          envLayer({ MODEL: 'env-value' }, 'MODEL'),
          layer('settings-value', settingsSource('model.name')),
        ],
        'default',
      );

      expect(result.value).toBe('cli-value');
      expect(result.source.kind).toBe('cli');
    });

    it('skips empty strings', () => {
      const result = resolveField(
        [
          layer('', cliSource('--model')),
          envLayer({ MODEL: 'env-value' }, 'MODEL'),
        ],
        'default',
      );

      expect(result.value).toBe('env-value');
    });
  });

  describe('resolveOptionalField', () => {
    it('returns undefined when no value present', () => {
      const result = resolveOptionalField([
        layer(undefined, cliSource('--key')),
        envLayer({}, 'KEY'),
      ]);

      expect(result).toBeUndefined();
    });

    it('returns first present value', () => {
      const result = resolveOptionalField([
        layer(undefined, cliSource('--key')),
        envLayer({ KEY: 'found' }, 'KEY'),
      ]);

      expect(result).toBeDefined();
      expect(result!.value).toBe('found');
      expect(result!.source.kind).toBe('env');
    });
  });

  describe('envLayer', () => {
    it('creates layer from environment variable', () => {
      const env = { MY_VAR: 'my-value' };
      const result = envLayer(env, 'MY_VAR');

      expect(result.value).toBe('my-value');
      expect(result.source).toEqual({ kind: 'env', envKey: 'MY_VAR' });
    });

    it('handles missing environment variable', () => {
      const env = {};
      const result = envLayer(env, 'MISSING_VAR');

      expect(result.value).toBeUndefined();
      expect(result.source).toEqual({ kind: 'env', envKey: 'MISSING_VAR' });
    });

    it('supports transform function', () => {
      const env = { PORT: '3000' };
      const result = envLayer(env, 'PORT', (v) => parseInt(v, 10));

      expect(result.value).toBe(3000);
    });
  });

  describe('source factory functions', () => {
    it('creates CLI source', () => {
      expect(cliSource('--model')).toEqual({ kind: 'cli', detail: '--model' });
    });

    it('creates settings source', () => {
      expect(settingsSource('model.name')).toEqual({
        kind: 'settings',
        settingsPath: 'model.name',
      });
    });

    it('creates default source', () => {
      expect(defaultSource('my-default')).toEqual({
        kind: 'default',
        detail: 'my-default',
      });
    });
  });
});
222 packages/core/src/utils/configResolver.ts Normal file
@@ -0,0 +1,222 @@
/**
 * @license
 * Copyright 2025 Qwen Team
 * SPDX-License-Identifier: Apache-2.0
 */

/**
 * Generic multi-source configuration resolver utilities.
 *
 * This module provides reusable tools for resolving configuration values
 * from multiple sources (CLI, env, settings, etc.) with priority ordering
 * and source tracking.
 */

/**
 * Known source kinds for configuration values.
 * Extensible for domain-specific needs.
 */
export type ConfigSourceKind =
  | 'cli'
  | 'env'
  | 'settings'
  | 'modelProviders'
  | 'default'
  | 'computed'
  | 'programmatic'
  | 'unknown';

/**
 * Source metadata for a configuration value.
 * Tracks where the value came from for debugging and UI display.
 */
export interface ConfigSource {
  /** The kind/category of the source */
  kind: ConfigSourceKind;
  /** Additional detail about the source (e.g., '--model' for CLI) */
  detail?: string;
  /** Environment variable key if kind is 'env' */
  envKey?: string;
  /** Settings path if kind is 'settings' (e.g., 'model.name') */
  settingsPath?: string;
  /** Auth type if relevant (for modelProviders) */
  authType?: string;
  /** Model ID if relevant (for modelProviders) */
  modelId?: string;
  /** Indirect source - when a value is derived via another source */
  via?: Omit<ConfigSource, 'via'>;
}

/**
 * Map of field names to their sources
 */
export type ConfigSources = Record<string, ConfigSource>;

/**
 * A configuration layer represents a potential source for a value.
 * Layers are evaluated in priority order (first non-undefined wins).
 */
export interface ConfigLayer<T> {
  /** The value from this layer (undefined means not present) */
  value: T | undefined;
  /** Source metadata for this layer */
  source: ConfigSource;
}

/**
 * Result of resolving a single field
 */
export interface ResolvedField<T> {
  /** The resolved value */
  value: T;
  /** Source metadata indicating where the value came from */
  source: ConfigSource;
}

/**
 * Resolve a single configuration field from multiple layers.
 *
 * Layers are evaluated in order. The first layer with a defined,
 * non-empty value wins. If no layer has a value, the default is used.
 *
 * @param layers - Configuration layers in priority order (highest first)
 * @param defaultValue - Default value if no layer provides one
 * @param defaultSource - Source metadata for the default value
 * @returns The resolved value and its source
 *
 * @example
 * ```typescript
 * const model = resolveField(
 *   [
 *     { value: argv.model, source: { kind: 'cli', detail: '--model' } },
 *     { value: env['OPENAI_MODEL'], source: { kind: 'env', envKey: 'OPENAI_MODEL' } },
 *     { value: settings.model, source: { kind: 'settings', settingsPath: 'model.name' } },
 *   ],
 *   'default-model',
 *   { kind: 'default', detail: 'default-model' }
 * );
 * ```
 */
export function resolveField<T>(
  layers: Array<ConfigLayer<T>>,
  defaultValue: T,
  defaultSource: ConfigSource = { kind: 'default' },
): ResolvedField<T> {
  for (const layer of layers) {
    if (isValuePresent(layer.value)) {
      return { value: layer.value, source: layer.source };
    }
  }
  return { value: defaultValue, source: defaultSource };
}

/**
 * Resolve a field that may not have a default (optional field).
 *
 * @param layers - Configuration layers in priority order
 * @returns The resolved value and source, or undefined if not found
 */
export function resolveOptionalField<T>(
  layers: Array<ConfigLayer<T>>,
): ResolvedField<T> | undefined {
  for (const layer of layers) {
    if (isValuePresent(layer.value)) {
      return { value: layer.value, source: layer.source };
    }
  }
  return undefined;
}

/**
 * Check if a value is "present" (not undefined, not null, not empty string).
 *
 * @param value - The value to check
 * @returns true if the value should be considered present
 */
function isValuePresent<T>(value: T | undefined | null): value is T {
  if (value === undefined || value === null) {
    return false;
  }
  // Treat empty strings as not present
  if (typeof value === 'string' && value.trim() === '') {
    return false;
  }
  return true;
}

/**
 * Create a CLI source descriptor
 */
export function cliSource(detail: string): ConfigSource {
  return { kind: 'cli', detail };
}

/**
 * Create an environment variable source descriptor
 */
function envSource(envKey: string): ConfigSource {
  return { kind: 'env', envKey };
}

/**
 * Create a settings source descriptor
 */
export function settingsSource(settingsPath: string): ConfigSource {
  return { kind: 'settings', settingsPath };
}

/**
 * Create a modelProviders source descriptor
 */
export function modelProvidersSource(
  authType: string,
  modelId: string,
  detail?: string,
): ConfigSource {
  return { kind: 'modelProviders', authType, modelId, detail };
}

/**
 * Create a default value source descriptor
 */
export function defaultSource(detail?: string): ConfigSource {
  return { kind: 'default', detail };
}

/**
 * Create a computed value source descriptor
 */
export function computedSource(detail?: string): ConfigSource {
  return { kind: 'computed', detail };
}

/**
 * Create a layer from an environment variable
 */
export function envLayer<T = string>(
  env: Record<string, string | undefined>,
  key: string,
  transform?: (value: string) => T,
): ConfigLayer<T> {
  const rawValue = env[key];
  const value =
    rawValue !== undefined
      ? transform
        ? transform(rawValue)
        : (rawValue as unknown as T)
      : undefined;
  return {
    value,
    source: envSource(key),
  };
}

/**
 * Create a layer with a static value and source
 */
export function layer<T>(
  value: T | undefined,
  source: ConfigSource,
): ConfigLayer<T> {
  return { value, source };
}
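As a usage sketch for the new resolver (not code from the repository), the helpers compose like this when resolving a provider's base URL and optional API key; the flag name, env keys, settings path, and default endpoint are assumptions.

```typescript
import {
  resolveField,
  resolveOptionalField,
  layer,
  envLayer,
  cliSource,
  settingsSource,
  defaultSource,
} from './configResolver.js';

// Hypothetical inputs: a CLI flag, the process environment, and a settings value.
const argv: { baseUrl?: string } = {};
const settings = { baseUrl: 'https://api.example.com/v1' };

const baseUrl = resolveField(
  [
    layer(argv.baseUrl, cliSource('--openai-base-url')),
    envLayer(process.env, 'OPENAI_BASE_URL'),
    layer(settings.baseUrl, settingsSource('modelProviders.openai.baseUrl')),
  ],
  'https://api.openai.com/v1',
  defaultSource('built-in OpenAI endpoint'),
);

// Optional field: undefined when nothing is set, otherwise value plus source.
const apiKey = resolveOptionalField([envLayer(process.env, 'OPENAI_API_KEY')]);

console.log(baseUrl.value, baseUrl.source.kind, apiKey?.source.envKey);
```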
@@ -36,7 +36,7 @@ interface DiffCommand {
  args: string[];
}

function commandExists(cmd: string): boolean {
export function commandExists(cmd: string): boolean {
  try {
    execSync(
      process.platform === 'win32' ? `where.exe ${cmd}` : `command -v ${cmd}`,
@@ -52,7 +52,7 @@ function commandExists(cmd: string): boolean {
 * Editor command configurations for different platforms.
 * Each editor can have multiple possible command names, listed in order of preference.
 */
const editorCommands: Record<
export const editorCommands: Record<
  EditorType,
  { win32: string[]; default: string[] }
> = {

@@ -1,75 +0,0 @@
/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

import { describe, it, expect, beforeEach, vi } from 'vitest';
import { Config } from '../config/config.js';
import fs from 'node:fs';
import {
  setSimulate429,
  disableSimulationAfterFallback,
  shouldSimulate429,
  resetRequestCounter,
} from './testUtils.js';
import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js';
// Import the new types (Assuming this test file is in packages/core/src/utils/)
import type { FallbackModelHandler } from '../fallback/types.js';

vi.mock('node:fs');

// Update the description to reflect that this tests the retry utility's integration
describe('Retry Utility Fallback Integration', () => {
  let config: Config;

  beforeEach(() => {
    vi.mocked(fs.existsSync).mockReturnValue(true);
    vi.mocked(fs.statSync).mockReturnValue({
      isDirectory: () => true,
    } as fs.Stats);
    config = new Config({
      targetDir: '/test',
      debugMode: false,
      cwd: '/test',
      model: 'gemini-2.5-pro',
    });

    // Reset simulation state for each test
    setSimulate429(false);
    resetRequestCounter();
  });

  // This test validates the Config's ability to store and execute the handler contract.
  it('should execute the injected FallbackHandler contract correctly', async () => {
    // Set up a minimal handler for testing, ensuring it matches the new type.
    const fallbackHandler: FallbackModelHandler = async () => 'retry';

    // Use the generalized setter
    config.setFallbackModelHandler(fallbackHandler);

    // Call the handler directly via the config property
    const result = await config.fallbackModelHandler!(
      'gemini-2.5-pro',
      DEFAULT_GEMINI_FLASH_MODEL,
    );

    // Verify it returns the correct intent
    expect(result).toBe('retry');
  });

  // This test validates the test utilities themselves.
  it('should properly disable simulation state after fallback (Test Utility)', () => {
    // Enable simulation
    setSimulate429(true);

    // Verify simulation is enabled
    expect(shouldSimulate429()).toBe(true);

    // Disable simulation after fallback
    disableSimulationAfterFallback();

    // Verify simulation is now disabled
    expect(shouldSimulate429()).toBe(false);
  });
});

@@ -8,7 +8,7 @@ import { createHash } from 'node:crypto';
import { type Content, Type } from '@google/genai';
import { type BaseLlmClient } from '../core/baseLlmClient.js';
import { LruCache } from './LruCache.js';
import { DEFAULT_GEMINI_FLASH_MODEL } from '../config/models.js';
import { DEFAULT_QWEN_FLASH_MODEL } from '../config/models.js';
import { promptIdContext } from './promptIdContext.js';

const MAX_CACHE_SIZE = 50;
@@ -149,7 +149,7 @@ export async function FixLLMEditWithInstruction(
    contents,
    schema: SearchReplaceEditSchema,
    abortSignal,
    model: DEFAULT_GEMINI_FLASH_MODEL,
    model: DEFAULT_QWEN_FLASH_MODEL,
    systemInstruction: EDIT_SYS_PROMPT,
    promptId,
    maxAttempts: 1,

@@ -11,7 +11,7 @@ import type {
  GenerateContentResponse,
} from '@google/genai';
import type { GeminiClient } from '../core/client.js';
import { DEFAULT_GEMINI_FLASH_LITE_MODEL } from '../config/models.js';
import { DEFAULT_QWEN_FLASH_MODEL } from '../config/models.js';
import { getResponseText, partToString } from './partUtils.js';

/**
@@ -86,7 +86,7 @@ export async function summarizeToolOutput(
      contents,
      toolOutputSummarizerConfig,
      abortSignal,
      DEFAULT_GEMINI_FLASH_LITE_MODEL,
      DEFAULT_QWEN_FLASH_MODEL,
    )) as unknown as GenerateContentResponse;
    return getResponseText(parsedResponse) || textToSummarize;
  } catch (error) {

@@ -1,6 +1,6 @@
{
  "name": "@qwen-code/qwen-code-test-utils",
  "version": "0.6.1-nightly.20260108.570ec432",
  "version": "0.7.0",
  "private": true,
  "main": "src/index.ts",
  "license": "Apache-2.0",

@@ -2,7 +2,7 @@
  "name": "qwen-code-vscode-ide-companion",
  "displayName": "Qwen Code Companion",
  "description": "Enable Qwen Code with direct access to your VS Code workspace.",
  "version": "0.6.1-nightly.20260108.570ec432",
  "version": "0.7.0",
  "publisher": "qwenlm",
  "icon": "assets/icon.png",
  "repository": {

@@ -61,7 +61,11 @@
/* Truncated content styling */
.execute-toolcall-row-content:not(.execute-toolcall-full) {
  max-height: 60px;
  mask-image: linear-gradient(to bottom, var(--app-primary-background) 40px, transparent 60px);
  mask-image: linear-gradient(
    to bottom,
    var(--app-primary-background) 40px,
    transparent 60px
  );
  overflow: hidden;
}

@@ -1,5 +1,5 @@
{
  "generatedAt": "2025-12-24T09:15:59.125Z",
  "generatedAt": "2026-01-07T14:56:23.662Z",
  "keys": [
    " - en-US: English",
    " - zh-CN: Simplified Chinese",
@@ -9,9 +9,9 @@
    "Approval mode changed to: {{mode}} (saved to {{scope}} settings{{location}})",
    "Auto-edit mode - Automatically approve file edits",
    "Available approval modes:",
    "Change auth (executes the /auth command)",
    "Chat history is already compressed.",
    "Clearing terminal and resetting chat.",
    "Clearing terminal.",
    "Continue with {{model}}",
    "Conversation checkpoint '{{tag}}' has been deleted.",
    "Conversation checkpoint saved with tag: {{tag}}.",
    "Conversation shared to {{filePath}}",
@@ -24,6 +24,7 @@
    "Failed to change approval mode: {{error}}",
    "Failed to login. Message: {{message}}",
    "Failed to save approval mode: {{error}}",
    "Failed to switch model to '{{modelId}}'.\n\n{{error}}",
    "Invalid file format. Only .md and .json are supported.",
    "Invalid language. Available: en-US, zh-CN",
    "List of saved conversations:",
@@ -43,6 +44,7 @@
    "Persist for this project/workspace",
    "Persist for this user on this machine",
    "Plan mode - Analyze only, do not modify files or execute commands",
    "Pro quota limit reached for {{model}}.",
    "Qwen OAuth authentication cancelled.",
    "Qwen OAuth authentication timed out. Please try again.",
    "Resume a conversation from a checkpoint. Usage: /chat resume <tag>",
@@ -54,8 +56,7 @@
    "Share the current conversation to a markdown or json file. Usage: /chat share <file>",
    "Usage: /approval-mode <mode> [--session|--user|--project]",
    "Usage: /language ui [zh-CN|en-US]",
    "YOLO mode - Automatically approve all tools",
    "clear the screen and conversation history"
    "YOLO mode - Automatically approve all tools"
  ],
  "count": 55
  "count": 56
}