
Qwen Code Configuration

API Configuration

Qwen Code supports API access through multiple methods. The primary approach is using API keys via environment variables.

Environment Variables for API Access

API access is typically configured through environment variables, which can be set in multiple ways:

  1. In your shell profile (e.g., ~/.bashrc, ~/.zshrc)
  2. In a project's .env file
  3. In a .qwen/.env file for project-specific settings
  4. In ~/.qwen/.env for user-wide settings

Essential API Environment Variables

| Variable | Description | Notes |
| --- | --- | --- |
| `OPENAI_API_KEY` | API key for OpenAI or compatible API providers. | Set this in your shell profile (e.g., `~/.bashrc`, `~/.zshrc`) or an `.env` file. |
| `OPENAI_BASE_URL` | Base URL for OpenAI or compatible API providers. | Set this in your shell profile (e.g., `~/.bashrc`, `~/.zshrc`) or an `.env` file. |
| `OPENAI_MODEL` | Specifies the default OpenAI model to use. | Overrides the hardcoded default. Example: `export OPENAI_MODEL="qwen3-coder-plus"` |
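For example, a minimal shell setup might look like the following. The key is a placeholder, and the base URL is shown only as an illustration of an OpenAI-compatible endpoint; substitute the values for your own provider:

```shell
# Placeholder values -- replace with your own key and provider endpoint.
export OPENAI_API_KEY="sk-your-api-key-here"
export OPENAI_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
export OPENAI_MODEL="qwen3-coder-plus"
```

Adding these lines to your shell profile makes them available in every session; placing them in a `.env` file instead scopes them to a project.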

API Key Providers

Qwen Code supports API keys from various providers:

Configuration Priority

API configuration follows the same precedence as other settings, which is detailed in the Configuration layers section below.

Note

Note on the new configuration format: The format of the settings.json file has been updated to a new, more organized structure. The old format is migrated automatically.

Qwen Code offers several ways to configure its behavior, including environment variables, command-line arguments, and settings files. This document outlines the different configuration methods and available settings.

Configuration layers

Configuration is applied in the following order of precedence (lower numbers are overridden by higher numbers):

| Level | Configuration Source | Description |
| --- | --- | --- |
| 1 | Default values | Hardcoded defaults within the application |
| 2 | System defaults file | System-wide default settings that can be overridden by other settings files |
| 3 | User settings file | Global settings for the current user |
| 4 | Project settings file | Project-specific settings |
| 5 | System settings file | System-wide settings that override all other settings files |
| 6 | Environment variables | System-wide or session-specific variables, potentially loaded from `.env` files |
| 7 | Command-line arguments | Values passed when launching the CLI |

Settings files

Qwen Code uses JSON settings files for persistent configuration. There are four locations for these files:

| File Type | Location | Scope |
| --- | --- | --- |
| System defaults file | Linux: `/etc/qwen-code/system-defaults.json`; Windows: `C:\ProgramData\qwen-code\system-defaults.json`; macOS: `/Library/Application Support/QwenCode/system-defaults.json`. The path can be overridden using the `QWEN_CODE_SYSTEM_DEFAULTS_PATH` environment variable. | Provides a base layer of system-wide default settings. These settings have the lowest precedence and are intended to be overridden by user, project, or system override settings. |
| User settings file | `~/.qwen/settings.json` (where `~` is your home directory). | Applies to all Qwen Code sessions for the current user. |
| Project settings file | `.qwen/settings.json` within your project's root directory. | Applies only when running Qwen Code from that specific project. Project settings override user settings. |
| System settings file | Linux: `/etc/qwen-code/settings.json`; Windows: `C:\ProgramData\qwen-code\settings.json`; macOS: `/Library/Application Support/QwenCode/settings.json`. The path can be overridden using the `QWEN_CODE_SYSTEM_SETTINGS_PATH` environment variable. | Applies to all Qwen Code sessions on the system, for all users. System settings override user and project settings. May be useful for enterprise system administrators to control users' Qwen Code setups. |

Note

Note on environment variables in settings: String values within your settings.json files can reference environment variables using either $VAR_NAME or ${VAR_NAME} syntax. These variables will be automatically resolved when the settings are loaded. For example, if you have an environment variable MY_API_TOKEN, you could use it in settings.json like this: "apiKey": "$MY_API_TOKEN".
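For example, using the documented advanced.tavilyApiKey setting together with the TAVILY_API_KEY environment variable described later in this document:

```json
{
  "advanced": {
    "tavilyApiKey": "${TAVILY_API_KEY}"
  }
}
```

When this file is loaded, `${TAVILY_API_KEY}` is replaced with the value of that environment variable, so the secret itself never needs to be committed to the settings file.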

The .qwen directory in your project

In addition to a project settings file, a project's .qwen directory can contain other project-specific files related to Qwen Code's operation, such as:

Available settings in settings.json

Settings are organized into categories. All settings should be placed within their corresponding top-level category object in your settings.json file.

general

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `general.preferredEditor` | string | The preferred editor to open files in. | `undefined` |
| `general.vimMode` | boolean | Enable Vim keybindings. | `false` |
| `general.disableAutoUpdate` | boolean | Disable automatic updates. | `false` |
| `general.disableUpdateNag` | boolean | Disable update notification prompts. | `false` |
| `general.checkpointing.enabled` | boolean | Enable session checkpointing for recovery. | `false` |

output

| Setting | Type | Description | Default | Possible Values |
| --- | --- | --- | --- | --- |
| `output.format` | string | The format of the CLI output. | `"text"` | `"text"`, `"json"` |

ui

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `ui.theme` | string | The color theme for the UI. See Themes for available options. | `undefined` |
| `ui.customThemes` | object | Custom theme definitions. | `{}` |
| `ui.hideWindowTitle` | boolean | Hide the window title bar. | `false` |
| `ui.hideTips` | boolean | Hide helpful tips in the UI. | `false` |
| `ui.hideBanner` | boolean | Hide the application banner. | `false` |
| `ui.hideFooter` | boolean | Hide the footer from the UI. | `false` |
| `ui.showMemoryUsage` | boolean | Display memory usage information in the UI. | `false` |
| `ui.showLineNumbers` | boolean | Show line numbers in code blocks in the CLI output. | `true` |
| `ui.showCitations` | boolean | Show citations for generated text in the chat. | `true` |
| `ui.enableWelcomeBack` | boolean | Show a welcome-back dialog when returning to a project with conversation history. When enabled, Qwen Code automatically detects if you are returning to a project with a previously generated project summary (`.qwen/PROJECT_SUMMARY.md`) and shows a dialog allowing you to continue your previous conversation or start fresh. This feature integrates with the `/summary` command and the quit confirmation dialog. | `true` |
| `ui.accessibility.disableLoadingPhrases` | boolean | Disable loading phrases for accessibility. | `false` |
| `ui.accessibility.screenReader` | boolean | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | `false` |
| `ui.customWittyPhrases` | array of strings | A list of custom phrases to display during loading states. When provided, the CLI cycles through these phrases instead of the default ones. | `[]` |

ide

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `ide.enabled` | boolean | Enable IDE integration mode. | `false` |
| `ide.hasSeenNudge` | boolean | Whether the user has seen the IDE integration nudge. | `false` |

privacy

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `privacy.usageStatisticsEnabled` | boolean | Enable collection of usage statistics. | `true` |

model

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `model.name` | string | The Qwen model to use for conversations. | `undefined` |
| `model.maxSessionTurns` | number | Maximum number of user/model/tool turns to keep in a session. `-1` means unlimited. | `-1` |
| `model.summarizeToolOutput` | object | Enables or disables the summarization of tool output. You can specify the token budget for the summarization using the `tokenBudget` setting. Note: Currently only the `run_shell_command` tool is supported. Example: `{"run_shell_command": {"tokenBudget": 2000}}` | `undefined` |
| `model.generationConfig` | object | Advanced overrides passed to the underlying content generator. Supports request controls such as `timeout`, `maxRetries`, and `disableCacheControl`, along with fine-tuning knobs under `samplingParams` (for example `temperature`, `top_p`, `max_tokens`). Leave unset to rely on provider defaults. | `undefined` |
| `model.chatCompression.contextPercentageThreshold` | number | Sets the threshold for chat history compression as a percentage of the model's total token limit. This is a value between 0 and 1 that applies to both automatic compression and the manual `/compress` command. For example, a value of `0.6` triggers compression when the chat history exceeds 60% of the token limit. Use `0` to disable compression entirely. | `0.7` |
| `model.skipNextSpeakerCheck` | boolean | Skip the next speaker check. | `false` |
| `model.skipLoopDetection` | boolean | Disables loop detection checks. Loop detection prevents infinite loops in AI responses but can generate false positives that interrupt legitimate workflows. Enable this option if you experience frequent false positive loop detection interruptions. | `false` |
| `model.skipStartupContext` | boolean | Skips sending the startup workspace context (environment summary and acknowledgement) at the beginning of each session. Enable this if you prefer to provide context manually or want to save tokens on startup. | `false` |
| `model.enableOpenAILogging` | boolean | Enables logging of OpenAI API calls for debugging and analysis. When enabled, API requests and responses are logged to JSON files. | `false` |
| `model.openAILoggingDir` | string | Custom directory path for OpenAI API logs. If not specified, defaults to `logs/openai` in the current working directory. Supports absolute paths, relative paths (resolved from the current working directory), and `~` expansion (home directory). | `undefined` |

Example model.generationConfig:

```json
{
  "model": {
    "generationConfig": {
      "timeout": 60000,
      "disableCacheControl": false,
      "samplingParams": {
        "temperature": 0.2,
        "top_p": 0.8,
        "max_tokens": 1024
      }
    }
  }
}
```

model.openAILoggingDir examples:

  • "~/qwen-logs" - Logs to ~/qwen-logs directory
  • "./custom-logs" - Logs to ./custom-logs relative to current directory
  • "/tmp/openai-logs" - Logs to absolute path /tmp/openai-logs
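The resolution rules above can be sketched in Python. This is illustrative only, not the CLI's actual implementation; `resolve_logging_dir` is a hypothetical helper name:

```python
import os
from pathlib import Path
from typing import Optional

def resolve_logging_dir(value: Optional[str], cwd: Path) -> Path:
    """Sketch of how model.openAILoggingDir is described to resolve."""
    if value is None:
        # Default: logs/openai under the current working directory.
        return cwd / "logs" / "openai"
    expanded = Path(os.path.expanduser(value))  # handles "~" expansion
    # Absolute paths are used as-is; relative paths resolve from cwd.
    return expanded if expanded.is_absolute() else cwd / expanded
```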

context

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `context.fileName` | string or array of strings | The name of the context file(s). | `undefined` |
| `context.importFormat` | string | The format to use when importing memory. | `undefined` |
| `context.discoveryMaxDirs` | number | Maximum number of directories to search for memory. | `200` |
| `context.includeDirectories` | array | Additional absolute or relative paths to include in the workspace context. Missing directories are skipped with a warning by default. Paths can use `~` to refer to the user's home directory. This setting can be combined with the `--include-directories` command-line flag. | `[]` |
| `context.loadFromIncludeDirectories` | boolean | Controls the behavior of the `/memory refresh` command. If `true`, QWEN.md files are loaded from all directories that are added. If `false`, QWEN.md is only loaded from the current directory. | `false` |
| `context.fileFiltering.respectGitIgnore` | boolean | Respect `.gitignore` files when searching. | `true` |
| `context.fileFiltering.respectQwenIgnore` | boolean | Respect `.qwenignore` files when searching. | `true` |
| `context.fileFiltering.enableRecursiveFileSearch` | boolean | Whether to enable searching recursively for filenames under the current tree when completing `@` prefixes in the prompt. | `true` |
| `context.fileFiltering.disableFuzzySearch` | boolean | When `true`, disables the fuzzy search capabilities when searching for files, which can improve performance on projects with a large number of files. | `false` |

Troubleshooting File Search Performance

If you are experiencing performance issues with file searching (e.g., with @ completions), especially in projects with a very large number of files, here are a few things you can try in order of recommendation:

  1. Use .qwenignore: Create a .qwenignore file in your project root to exclude directories that contain a large number of files that you don't need to reference (e.g., build artifacts, logs, node_modules). Reducing the total number of files crawled is the most effective way to improve performance.
  2. Disable Fuzzy Search: If ignoring files is not enough, you can disable fuzzy search by setting disableFuzzySearch to true in your settings.json file. This will use a simpler, non-fuzzy matching algorithm, which can be faster.
  3. Disable Recursive File Search: As a last resort, you can disable recursive file search entirely by setting enableRecursiveFileSearch to false. This will be the fastest option as it avoids a recursive crawl of your project. However, it means you will need to type the full path to files when using @ completions.
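For step 1, assuming gitignore-style patterns (as the node_modules and build-artifact examples above suggest), a `.qwenignore` at the project root might look like:

```
node_modules/
dist/
build/
coverage/
*.log
```

Each line excludes a directory tree or file pattern from file discovery, shrinking the set of files crawled for `@` completions.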

tools

| Setting | Type | Description | Default | Notes |
| --- | --- | --- | --- | --- |
| `tools.sandbox` | boolean or string | Sandbox execution environment (can be a boolean or a path string). | `undefined` | |
| `tools.shell.enableInteractiveShell` | boolean | Use node-pty for an interactive shell experience. Fallback to child_process still applies. | `false` | |
| `tools.core` | array of strings | Restricts the set of built-in tools with an allowlist. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"tools.core": ["run_shell_command(ls -l)"]` will only allow the `ls -l` command to be executed. | `undefined` | |
| `tools.exclude` | array of strings | Tool names to exclude from discovery. You can also specify command-specific restrictions for tools that support it, like the `run_shell_command` tool. For example, `"tools.exclude": ["run_shell_command(rm -rf)"]` will block the `rm -rf` command. | `undefined` | Security Note: Command-specific restrictions in `tools.exclude` for `run_shell_command` are based on simple string matching and can be easily bypassed. This feature is not a security mechanism and should not be relied upon to safely execute untrusted code. It is recommended to use `tools.core` to explicitly select commands that can be executed. |
| `tools.allowed` | array of strings | A list of tool names that will bypass the confirmation dialog. This is useful for tools that you trust and use frequently. For example, `["run_shell_command(git)", "run_shell_command(npm test)"]` skips the confirmation dialog for any `git` and `npm test` commands. | `undefined` | |
| `tools.approvalMode` | string | Sets the default approval mode for tool usage. | `default` | Possible values: `plan` (analyze only, do not modify files or execute commands), `default` (require approval before file edits or shell commands run), `auto-edit` (automatically approve file edits), `yolo` (automatically approve all tool calls). |
| `tools.discoveryCommand` | string | Command to run for tool discovery. | `undefined` | |
| `tools.callCommand` | string | Defines a custom shell command for calling a specific tool that was discovered using `tools.discoveryCommand`. The shell command must take the function name (exactly as in the function declaration) as the first command-line argument, read function arguments as JSON on stdin (analogous to `functionCall.args`), and return function output as JSON on stdout (analogous to `functionResponse.response.content`). | `undefined` | |
| `tools.useRipgrep` | boolean | Use ripgrep for file content search instead of the fallback implementation. Provides faster search performance. | `true` | |
| `tools.useBuiltinRipgrep` | boolean | Use the bundled ripgrep binary. When set to `false`, the system-level `rg` command is used instead. Only effective when `tools.useRipgrep` is `true`. | `true` | |
| `tools.enableToolOutputTruncation` | boolean | Enable truncation of large tool outputs. | `true` | Requires restart: Yes |
| `tools.truncateToolOutputThreshold` | number | Truncate tool output if it is larger than this many characters. Applies to the Shell, Grep, Glob, ReadFile, and ReadManyFiles tools. | `25000` | Requires restart: Yes |
| `tools.truncateToolOutputLines` | number | Maximum lines or entries kept when truncating tool output. Applies to the Shell, Grep, Glob, ReadFile, and ReadManyFiles tools. | `1000` | Requires restart: Yes |
| `tools.autoAccept` | boolean | Controls whether the CLI automatically accepts and executes tool calls that are considered safe (e.g., read-only operations) without explicit user confirmation. If `true`, the CLI bypasses the confirmation prompt for tools deemed safe. | `false` | |
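Putting the allowlist settings above together, a settings.json fragment restricting shell usage might look like the following. The specific command patterns are chosen only for illustration:

```json
{
  "tools": {
    "core": ["run_shell_command(git status)", "run_shell_command(npm test)"],
    "allowed": ["run_shell_command(git status)"],
    "approvalMode": "default"
  }
}
```

With this configuration, only the two listed shell commands can run at all, and `git status` additionally skips the confirmation dialog.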

mcp

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `mcp.serverCommand` | string | Command to start an MCP server. | `undefined` |
| `mcp.allowed` | array of strings | An allowlist of MCP server names that should be made available to the model. This can be used to restrict the set of MCP servers to connect to. Note that this is ignored if `--allowed-mcp-server-names` is set. | `undefined` |
| `mcp.excluded` | array of strings | A denylist of MCP servers to exclude. A server listed in both `mcp.excluded` and `mcp.allowed` is excluded. Note that this is ignored if `--allowed-mcp-server-names` is set. | `undefined` |

Note

Security Note for MCP servers: These settings use simple string matching on MCP server names, which can be modified. If you're a system administrator looking to prevent users from bypassing this, consider configuring the mcpServers at the system settings level such that the user will not be able to configure any MCP servers of their own. This should not be used as an airtight security mechanism.

security

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `security.folderTrust.enabled` | boolean | Tracks whether folder trust is enabled. | `false` |
| `security.auth.selectedType` | string | The currently selected authentication type. | `undefined` |
| `security.auth.enforcedType` | string | The required auth type (useful for enterprises). | `undefined` |
| `security.auth.useExternal` | boolean | Whether to use an external authentication flow. | `undefined` |

advanced

| Setting | Type | Description | Default |
| --- | --- | --- | --- |
| `advanced.autoConfigureMemory` | boolean | Automatically configure Node.js memory limits. | `false` |
| `advanced.dnsResolutionOrder` | string | The DNS resolution order. | `undefined` |
| `advanced.excludedEnvVars` | array of strings | Environment variables that should be excluded from being loaded from project `.env` files. This prevents project-specific environment variables (like `DEBUG=true`) from interfering with CLI behavior. Variables from `.qwen/.env` files are never excluded. | `["DEBUG","DEBUG_MODE"]` |
| `advanced.bugCommand` | object | Overrides the default URL for the `/bug` command. Properties: `urlTemplate` (string): a URL that can contain `{title}` and `{info}` placeholders. Example: `"bugCommand": { "urlTemplate": "https://bug.example.com/new?title={title}&info={info}" }` | `undefined` |
| `advanced.tavilyApiKey` | string | API key for the Tavily web search service. Used to enable the `web_search` tool functionality. | `undefined` |

Note

Note about advanced.tavilyApiKey: This is a legacy configuration format. For Qwen OAuth users, DashScope provider is automatically available without any configuration. For other authentication types, configure Tavily or Google providers using the new webSearch configuration format.

mcpServers

Configures connections to one or more Model-Context Protocol (MCP) servers for discovering and using custom tools. Qwen Code attempts to connect to each configured MCP server to discover available tools. If multiple MCP servers expose a tool with the same name, the tool names will be prefixed with the server alias you defined in the configuration (e.g., serverAlias__actualToolName) to avoid conflicts. Note that the system might strip certain schema properties from MCP tool definitions for compatibility. At least one of command, url, or httpUrl must be provided. If multiple are specified, the order of precedence is httpUrl, then url, then command.

| Property | Type | Description | Optional |
| --- | --- | --- | --- |
| `mcpServers.<SERVER_NAME>.command` | string | The command to execute to start the MCP server via standard I/O. | Yes |
| `mcpServers.<SERVER_NAME>.args` | array of strings | Arguments to pass to the command. | Yes |
| `mcpServers.<SERVER_NAME>.env` | object | Environment variables to set for the server process. | Yes |
| `mcpServers.<SERVER_NAME>.cwd` | string | The working directory in which to start the server. | Yes |
| `mcpServers.<SERVER_NAME>.url` | string | The URL of an MCP server that uses Server-Sent Events (SSE) for communication. | Yes |
| `mcpServers.<SERVER_NAME>.httpUrl` | string | The URL of an MCP server that uses streamable HTTP for communication. | Yes |
| `mcpServers.<SERVER_NAME>.headers` | object | A map of HTTP headers to send with requests to `url` or `httpUrl`. | Yes |
| `mcpServers.<SERVER_NAME>.timeout` | number | Timeout in milliseconds for requests to this MCP server. | Yes |
| `mcpServers.<SERVER_NAME>.trust` | boolean | Trust this server and bypass all tool call confirmations. | Yes |
| `mcpServers.<SERVER_NAME>.description` | string | A brief description of the server, which may be used for display purposes. | Yes |
| `mcpServers.<SERVER_NAME>.includeTools` | array of strings | List of tool names to include from this MCP server. When specified, only the tools listed here will be available from this server (allowlist behavior). If not specified, all tools from the server are enabled by default. | Yes |
| `mcpServers.<SERVER_NAME>.excludeTools` | array of strings | List of tool names to exclude from this MCP server. Tools listed here will not be available to the model, even if they are exposed by the server. Note: `excludeTools` takes precedence over `includeTools`; if a tool is in both lists, it is excluded. | Yes |
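For example, one stdio server and one streamable-HTTP server could be configured as follows. The server names, URL, header value, and tool names are all placeholders chosen for illustration:

```json
{
  "mcpServers": {
    "myPythonServer": {
      "command": "python",
      "args": ["mcp_server.py"],
      "cwd": "./mcp-tools",
      "timeout": 15000
    },
    "myRemoteServer": {
      "httpUrl": "https://example.com/mcp",
      "headers": { "Authorization": "Bearer $MY_MCP_TOKEN" },
      "includeTools": ["search", "fetch"]
    }
  }
}
```

If both servers exposed a tool named `search`, it would surface as `myPythonServer__search` and `myRemoteServer__search` per the prefixing rule described above.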

telemetry

Configures logging and metrics collection for Qwen Code. For more information, see telemetry.

| Setting | Type | Description |
| --- | --- | --- |
| `telemetry.enabled` | boolean | Whether or not telemetry is enabled. |
| `telemetry.target` | string | The destination for collected telemetry. Supported values are `local` and `gcp`. |
| `telemetry.otlpEndpoint` | string | The endpoint for the OTLP Exporter. |
| `telemetry.otlpProtocol` | string | The protocol for the OTLP Exporter (`grpc` or `http`). |
| `telemetry.logPrompts` | boolean | Whether or not to include the content of user prompts in the logs. |
| `telemetry.outfile` | string | The file to write telemetry to when `target` is `local`. |
| `telemetry.useCollector` | boolean | Whether to use an external OTLP collector. |

Example settings.json

Here is an example of a settings.json file with the nested structure, new as of v0.3.0:

```json
{
  "general": {
    "vimMode": true,
    "preferredEditor": "code"
  },
  "ui": {
    "theme": "GitHub",
    "hideBanner": true,
    "hideTips": false,
    "customWittyPhrases": [
      "You forget a thousand things every day. Make sure this is one of 'em",
      "Connecting to AGI"
    ]
  },
  "tools": {
    "approvalMode": "yolo",
    "sandbox": "docker",
    "discoveryCommand": "bin/get_tools",
    "callCommand": "bin/call_tool",
    "exclude": ["write_file"]
  },
  "mcpServers": {
    "mainServer": {
      "command": "bin/mcp_server.py"
    },
    "anotherServer": {
      "command": "node",
      "args": ["mcp_server.js", "--verbose"]
    }
  },
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317",
    "logPrompts": true
  },
  "privacy": {
    "usageStatisticsEnabled": true
  },
  "model": {
    "name": "qwen3-coder-plus",
    "maxSessionTurns": 10,
    "enableOpenAILogging": false,
    "openAILoggingDir": "~/qwen-logs",
    "summarizeToolOutput": {
      "run_shell_command": {
        "tokenBudget": 100
      }
    }
  },
  "context": {
    "fileName": ["CONTEXT.md", "QWEN.md"],
    "includeDirectories": ["path/to/dir1", "~/path/to/dir2", "../path/to/dir3"],
    "loadFromIncludeDirectories": true,
    "fileFiltering": {
      "respectGitIgnore": false
    }
  },
  "advanced": {
    "excludedEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"]
  }
}
```

Shell History

The CLI keeps a history of shell commands you run. To avoid conflicts between different projects, this history is stored in a project-specific directory within your user's home folder.

  • Location: ~/.qwen/tmp/<project_hash>/shell_history
    • <project_hash> is a unique identifier generated from your project's root path.
    • The history is stored in a file named shell_history.

Environment Variables & .env Files

Environment variables are a common way to configure applications, especially for sensitive information like API keys or for settings that might change between environments.

The CLI automatically loads environment variables from an .env file. The loading order is:

  1. .env file in the current working directory.
  2. If not found, it searches upwards in parent directories until it finds an .env file or reaches the project root (identified by a .git folder) or the home directory.
  3. If still not found, it looks for ~/.env (in the user's home directory).
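The lookup order above can be sketched in Python. This is an illustrative model of the described behavior, not the CLI's actual implementation; `find_dotenv` is a hypothetical helper name:

```python
from pathlib import Path
from typing import Optional

def find_dotenv(start: Path, home: Path) -> Optional[Path]:
    """Walk up from `start` looking for a .env file, stopping at the
    project root (marked by a .git folder) or the home directory, then
    fall back to ~/.env."""
    current = start.resolve()
    while True:
        candidate = current / ".env"
        if candidate.is_file():
            return candidate
        # Stop after checking the project root or the home directory.
        if (current / ".git").is_dir() or current == home or current.parent == current:
            break
        current = current.parent
    fallback = home / ".env"
    return fallback if fallback.is_file() else None
```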

Tip

Environment Variable Exclusion: Some environment variables (like DEBUG and DEBUG_MODE) are automatically excluded from project .env files by default to prevent interference with the CLI behavior. Variables from .qwen/.env files are never excluded. You can customize this behavior using the advanced.excludedEnvVars setting in your settings.json file.

Environment Variables Table

| Variable | Description | Notes |
| --- | --- | --- |
| `OPENAI_API_KEY` | API key for OpenAI or compatible API providers. | Set this in your shell profile (e.g., `~/.bashrc`, `~/.zshrc`) or an `.env` file. |
| `OPENAI_BASE_URL` | Base URL for OpenAI or compatible API providers. | Set this in your shell profile (e.g., `~/.bashrc`, `~/.zshrc`) or an `.env` file. |
| `OPENAI_MODEL` | Specifies the default OpenAI model to use. | Overrides the hardcoded default. Example: `export OPENAI_MODEL="qwen3-coder-plus"` |
| `GEMINI_TELEMETRY_ENABLED` | Set to `true` or `1` to enable telemetry; any other value disables it. | Overrides the `telemetry.enabled` setting. |
| `GEMINI_TELEMETRY_TARGET` | Sets the telemetry target (`local` or `gcp`). | Overrides the `telemetry.target` setting. |
| `GEMINI_TELEMETRY_OTLP_ENDPOINT` | Sets the OTLP endpoint for telemetry. | Overrides the `telemetry.otlpEndpoint` setting. |
| `GEMINI_TELEMETRY_OTLP_PROTOCOL` | Sets the OTLP protocol (`grpc` or `http`). | Overrides the `telemetry.otlpProtocol` setting. |
| `GEMINI_TELEMETRY_LOG_PROMPTS` | Set to `true` or `1` to enable logging of user prompts; any other value disables it. | Overrides the `telemetry.logPrompts` setting. |
| `GEMINI_TELEMETRY_OUTFILE` | Sets the file path to write telemetry to when the target is `local`. | Overrides the `telemetry.outfile` setting. |
| `GEMINI_TELEMETRY_USE_COLLECTOR` | Set to `true` or `1` to use an external OTLP collector; any other value disables it. | Overrides the `telemetry.useCollector` setting. |
| `GEMINI_SANDBOX` | Alternative to the sandbox setting in settings.json. | Accepts `true`, `false`, `docker`, `podman`, or a custom command string. |
| `SEATBELT_PROFILE` | (macOS specific) Switches the Seatbelt (`sandbox-exec`) profile on macOS. | `permissive-open` (default): restricts writes to the project folder (and a few other folders; see `packages/cli/src/utils/sandbox-macos-permissive-open.sb`) but allows other operations. `strict`: uses a strict profile that declines operations by default. `<profile_name>`: uses a custom profile. To define a custom profile, create a file named `sandbox-macos-<profile_name>.sb` in your project's `.qwen/` directory (e.g., `my-project/.qwen/sandbox-macos-custom.sb`). |
| `DEBUG` or `DEBUG_MODE` | (Often used by underlying libraries or the CLI itself.) Set to `true` or `1` to enable verbose debug logging, which can be helpful for troubleshooting. | Note: These variables are automatically excluded from project `.env` files by default to prevent interference with CLI behavior. Use `.qwen/.env` files if you need to set these for Qwen Code specifically. |
| `NO_COLOR` | Set to any value to disable all color output in the CLI. | |
| `CLI_TITLE` | Set to a string to customize the title of the CLI. | |
| `CODE_ASSIST_ENDPOINT` | Specifies the endpoint for the code assist server. | Useful for development and testing. |
| `TAVILY_API_KEY` | Your API key for the Tavily web search service. | Used to enable the `web_search` tool functionality. Example: `export TAVILY_API_KEY="tvly-your-api-key-here"` |

Command-Line Arguments

Arguments passed directly when running the CLI can override other configurations for that specific session.

Command-Line Arguments Table

| Argument | Alias | Description | Possible Values | Notes |
| --- | --- | --- | --- | --- |
| `--model` | `-m` | Specifies the Qwen model to use for this session. | Model name | Example: `npm start -- --model qwen3-coder-plus` |
| `--prompt` | `-p` | Passes a prompt directly to the command. This invokes Qwen Code in non-interactive mode. | Your prompt text | For scripting, use the `--output-format json` flag to get structured output. |
| `--prompt-interactive` | `-i` | Starts an interactive session with the provided prompt as the initial input. | Your prompt text | The prompt is processed within the interactive session, not before it. Cannot be used when piping input from stdin. Example: `qwen -i "explain this code"` |
| `--output-format` | `-o` | Specifies the format of the CLI output for non-interactive mode. | `text`, `json`, `stream-json` | `text` (default): the standard human-readable output. `json`: machine-readable JSON emitted at the end of execution. `stream-json`: streaming JSON messages emitted as they occur during execution. See Headless Mode for detailed information. |
| `--input-format` | | Specifies the format consumed from standard input. | `text`, `stream-json` | `text` (default): standard text input from stdin or command-line arguments. `stream-json`: JSON message protocol via stdin for bidirectional communication; requires `--output-format stream-json`, and stdin is reserved for protocol messages. See Headless Mode for detailed information. |
| `--include-partial-messages` | | Include partial assistant messages when using the `stream-json` output format. When enabled, emits stream events (`message_start`, `content_block_delta`, etc.) as they occur during streaming. | | Default: `false`. Requires `--output-format stream-json`. See Headless Mode for detailed information about stream events. |
| `--sandbox` | `-s` | Enables sandbox mode for this session. | | |
| `--sandbox-image` | | Sets the sandbox image URI. | | |
| `--debug` | `-d` | Enables debug mode for this session, providing more verbose output. | | |
| `--all-files` | `-a` | If set, recursively includes all files within the current directory as context for the prompt. | | |
| `--help` | `-h` | Displays help information about command-line arguments. | | |
| `--show-memory-usage` | | Displays the current memory usage. | | |
| `--yolo` | | Enables YOLO mode, which automatically approves all tool calls. | | |
| `--approval-mode` | | Sets the approval mode for tool calls. | `plan`, `default`, `auto-edit`, `yolo` | `plan`: analyze only; do not modify files or execute commands. `default`: require approval for file edits or shell commands (default behavior). `auto-edit`: automatically approve edit tools (edit, write_file) while prompting for others. `yolo`: automatically approve all tool calls (equivalent to `--yolo`). Cannot be used together with `--yolo`; use `--approval-mode=yolo` instead for the new unified approach. Example: `qwen --approval-mode auto-edit`. See more about Approval Mode. |
| `--allowed-tools` | | A comma-separated list of tool names that will bypass the confirmation dialog. | Tool names | Example: `qwen --allowed-tools "Shell(git status)"` |
| `--telemetry` | | Enables telemetry. | | |
| `--telemetry-target` | | Sets the telemetry target. | | See telemetry for more information. |
| `--telemetry-otlp-endpoint` | | Sets the OTLP endpoint for telemetry. | | See telemetry for more information. |
| `--telemetry-otlp-protocol` | | Sets the OTLP protocol for telemetry (`grpc` or `http`). | | Defaults to `grpc`. See telemetry for more information. |
| `--telemetry-log-prompts` | | Enables logging of prompts for telemetry. | | See telemetry for more information. |
| `--checkpointing` | | Enables checkpointing. | | |
| `--extensions` | `-e` | Specifies a list of extensions to use for the session. | Extension names | If not provided, all available extensions are used. Use the special term `qwen -e none` to disable all extensions. Example: `qwen -e my-extension -e my-other-extension` |
| `--list-extensions` | `-l` | Lists all available extensions and exits. | | |
| `--proxy` | | Sets the proxy for the CLI. | Proxy URL | Example: `--proxy http://localhost:7890` |
| `--include-directories` | | Includes additional directories in the workspace for multi-directory support. | Directory paths | Can be specified multiple times or as comma-separated values. A maximum of 5 directories can be added. Example: `--include-directories /path/to/project1,/path/to/project2` or `--include-directories /path/to/project1 --include-directories /path/to/project2` |
| `--screen-reader` | | Enables screen reader mode, which adjusts the TUI for better compatibility with screen readers. | | |
| `--version` | | Displays the version of the CLI. | | |
| `--openai-logging` | | Enables logging of OpenAI API calls for debugging and analysis. | | Overrides the `enableOpenAILogging` setting in settings.json. |
| `--openai-logging-dir` | | Sets a custom directory path for OpenAI API logs. | Directory path | Overrides the `openAILoggingDir` setting in settings.json. Supports absolute paths, relative paths, and `~` expansion. Example: `qwen --openai-logging-dir "~/qwen-logs" --openai-logging` |
| `--tavily-api-key` | | Sets the Tavily API key for web search functionality for this session. | API key | Example: `qwen --tavily-api-key tvly-your-api-key-here` |

Context Files (Hierarchical Instructional Context)

While not strictly configuration for the CLI's behavior, context files (defaulting to QWEN.md but configurable via the context.fileName setting) are crucial for configuring the instructional context (also referred to as "memory"). This powerful feature allows you to give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.

  • Purpose: These Markdown files contain instructions, guidelines, or context that you want the Qwen model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.

Example Context File Content (e.g. QWEN.md)

Here's a conceptual example of what a context file at the root of a TypeScript project might contain:

# Project: My Awesome TypeScript Library

## General Instructions:
- When generating new TypeScript code, please follow the existing coding style.
- Ensure all new functions and classes have JSDoc comments.
- Prefer functional programming paradigms where appropriate.
- All code should be compatible with TypeScript 5.0 and Node.js 20+.

## Coding Style:
- Use 2 spaces for indentation.
- Interface names should be prefixed with `I` (e.g., `IUserService`).
- Private class members should be prefixed with an underscore (`_`).
- Always use strict equality (`===` and `!==`).

## Specific Component: `src/api/client.ts`
- This file handles all outbound API requests.
- When adding new API call functions, ensure they include robust error handling and logging.
- Use the existing `fetchWithRetry` utility for all GET requests.

## Regarding Dependencies:
- Avoid introducing new external dependencies unless absolutely necessary.
- If a new dependency is required, please state the reason.

This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Using project-specific context files to establish conventions and context is highly encouraged.

  • Hierarchical Loading and Precedence: The CLI implements a sophisticated hierarchical memory system by loading context files (e.g., QWEN.md) from several locations. Content from files lower in this list (more specific) typically overrides or supplements content from files higher up (more general). The exact concatenation order and final context can be inspected using the /memory show command. The typical loading order is:
    1. Global Context File:
      • Location: ~/.qwen/<configured-context-filename> (e.g., ~/.qwen/QWEN.md in your user home directory).
      • Scope: Provides default instructions for all your projects.
    2. Project Root & Ancestors Context Files:
      • Location: The CLI searches for the configured context file in the current working directory and then in each parent directory up to either the project root (identified by a .git folder) or your home directory.
      • Scope: Provides context relevant to the entire project or a significant portion of it.
    3. Sub-directory Context Files (Contextual/Local):
      • Location: The CLI also scans for the configured context file in subdirectories below the current working directory (respecting common ignore patterns like node_modules, .git, etc.). The breadth of this search is limited to 200 directories by default, but can be configured with the context.discoveryMaxDirs setting in your settings.json file.
      • Scope: Allows for highly specific instructions relevant to a particular component, module, or subsection of your project.
  • Concatenation & UI Indication: The contents of all found context files are concatenated (with separators indicating their origin and path) and provided as part of the system prompt. The CLI footer displays the count of loaded context files, giving you a quick visual cue about the active instructional context.
  • Importing Content: You can modularize your context files by importing other Markdown files using the @path/to/file.md syntax. For more details, see the Memory Import Processor documentation.
  • Commands for Memory Management:
    • Use /memory refresh to force a re-scan and reload of all context files from all configured locations. This updates the AI's instructional context.
    • Use /memory show to display the combined instructional context currently loaded, allowing you to verify the hierarchy and content being used by the AI.
    • See the Commands documentation for full details on the /memory command and its sub-commands (show and refresh).
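The loading order above can be sketched with plain shell. This is only a conceptual illustration of the concatenation, not the CLI's actual implementation, and all paths and file contents here are made up:

```shell
# Illustrative sketch only: context files are combined from most
# general to most specific (paths and contents are hypothetical).
workdir=$(mktemp -d)
mkdir -p "$workdir/home/.qwen" "$workdir/project/src"

echo "# Global instructions"      > "$workdir/home/.qwen/QWEN.md"
echo "# Project-wide conventions" > "$workdir/project/QWEN.md"
echo "# src-specific notes"       > "$workdir/project/src/QWEN.md"

# Global file first, then the project root, then sub-directory files:
combined=$(cat "$workdir/home/.qwen/QWEN.md" \
               "$workdir/project/QWEN.md" \
               "$workdir/project/src/QWEN.md")
printf '%s\n' "$combined"
rm -rf "$workdir"
```

In a real session, `/memory show` displays the actual combined context, including separators that indicate each file's origin and path.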

By understanding and utilizing these configuration layers and the hierarchical nature of context files, you can effectively manage the AI's memory and tailor Qwen Code's responses to your specific needs and projects.
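As a small illustration of the import syntax mentioned above, a root context file can pull shared fragments into the combined context. The file paths below are hypothetical:

```markdown
# Project: My Awesome TypeScript Library

@./docs/coding-style.md
@./docs/dependencies.md
```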

Sandbox

Qwen Code can execute potentially unsafe operations (like shell commands and file modifications) within a sandboxed environment to protect your system.

Sandboxing is disabled by default, but you can enable it in a few ways:

  • Using the --sandbox or -s flag.
  • Setting the GEMINI_SANDBOX environment variable.
  • Sandboxing is enabled by default when using --yolo or --approval-mode=yolo.
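For example, to enable sandboxing persistently rather than per invocation, you can export the environment variable. This is a minimal sketch; which values beyond true are accepted (e.g. a specific container runtime) depends on your setup, so treat the alternatives named in the comment as assumptions:

```shell
# Set in your shell profile (e.g. ~/.bashrc) or a .env file.
# "true" selects the default container runtime; values such as
# "docker" or "podman" may also be supported, depending on your install.
export GEMINI_SANDBOX=true
```

For a one-off sandboxed session, `qwen -s` has the same effect for that run only.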

When enabled, Qwen Code uses a pre-built qwen-code-sandbox Docker image by default.

For project-specific sandboxing needs, you can create a custom Dockerfile at .qwen/sandbox.Dockerfile in your project's root directory. This Dockerfile can be based on the base sandbox image:

FROM qwen-code-sandbox
# Add your custom dependencies or configurations here
# For example:
# RUN apt-get update && apt-get install -y some-package
# COPY ./my-config /app/my-config

When .qwen/sandbox.Dockerfile exists, you can set the BUILD_SANDBOX environment variable when running Qwen Code to automatically build the custom sandbox image:

BUILD_SANDBOX=1 qwen -s

Usage Statistics

To help us improve Qwen Code, we collect anonymized usage statistics. This data helps us understand how the CLI is used, identify common issues, and prioritize new features.

What we collect:

  • Tool Calls: We log the names of the tools that are called, whether they succeed or fail, and how long they take to execute. We do not collect the arguments passed to the tools or any data returned by them.
  • API Requests: We log the model used for each request, the duration of the request, and whether it was successful. We do not collect the content of the prompts or responses.
  • Session Information: We collect information about the configuration of the CLI, such as the enabled tools and the approval mode.

What we DON'T collect:

  • Personally Identifiable Information (PII): We do not collect any personal information, such as your name, email address, or API keys.
  • Prompt and Response Content: We do not log the content of your prompts or the responses from the model.
  • File Content: We do not log the content of any files that are read or written by the CLI.

How to opt out:

You can opt out of usage statistics collection at any time by setting the usageStatisticsEnabled property to false under the privacy category in your settings.json file:

{
  "privacy": {
    "usageStatisticsEnabled": false
  }
}

Note

When usage statistics are enabled, events are sent to an Alibaba Cloud RUM collection endpoint.