DeepSeek TUI
> Terminal coding agent for DeepSeek V4. It runs from the deepseek command, streams reasoning blocks, edits local workspaces with approval gates, and includes an auto mode that chooses both model and thinking level per turn.
deepseek is distributed as Rust binaries: the dispatcher command
(deepseek) and the companion TUI runtime (deepseek-tui). Pick whichever
install path you already use; they all put the same commands on your PATH.
The npm package is an installer/wrapper for the release binaries, not the
agent runtime itself.
```shell
# 1. npm — easiest if you already use Node. The package downloads the
#    matching prebuilt Rust binaries from GitHub Releases.
npm install -g deepseek-tui

# 2. Cargo — no Node needed.
cargo install deepseek-tui-cli --locked   # `deepseek` (entry point)
cargo install deepseek-tui --locked       # `deepseek-tui` (TUI binary)

# 3. Homebrew — macOS package manager.
brew tap Hmbown/deepseek-tui
brew install deepseek-tui

# 4. Direct download — no package manager or toolchain.
#    https://github.com/Hmbown/DeepSeek-TUI/releases
#    Prebuilt for Linux x64/ARM64, macOS x64/ARM64, Windows x64.

# 5. Docker — prebuilt release image.
docker volume create deepseek-tui-home
docker run --rm -it \
  -e DEEPSEEK_API_KEY="$DEEPSEEK_API_KEY" \
  -v deepseek-tui-home:/home/deepseek/.deepseek \
  -v "$PWD:/workspace" \
  -w /workspace \
  ghcr.io/hmbown/deepseek-tui:latest
```
> In mainland China, speed up the npm path with
> --registry=https://registry.npmmirror.com, or use the
> Cargo mirror below.
>
> Download safety: official release binaries live under
> https://github.com/Hmbown/DeepSeek-TUI/releases. For manual downloads,
> verify the SHA-256 manifest and avoid look-alike repositories or search-result
> mirrors. See download safety and checksums.
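For a manual download, verification amounts to recomputing the digest and comparing it against the published manifest. A self-contained sketch (using a stand-in file, since the real asset and `SHA256SUMS` manifest come from the Releases page; on macOS, substitute `shasum -a 256` for `sha256sum`):

```shell
# Stand-ins for a downloaded asset and its published SHA-256 manifest.
printf 'demo binary' > deepseek-tui.bin
sha256sum deepseek-tui.bin > SHA256SUMS

# Recompute and compare; prints "deepseek-tui.bin: OK" on a match and
# exits non-zero on any mismatch.
sha256sum -c SHA256SUMS
```

If the check fails, delete the download and fetch it again from the official Releases page rather than from a mirror.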
Already installed? Run deepseek update, or use the updater that matches your install path (npm, Cargo, Homebrew, or Docker pull).
DeepSeek TUI is a coding agent that runs in your terminal. It can read and edit files, run shell commands, search the web, manage git, and coordinate sub-agents from a keyboard-driven TUI.
It is built around DeepSeek V4 (deepseek-v4-pro / deepseek-v4-flash), including 1M-token context windows, streaming reasoning blocks, and prefix-cache-aware cost reporting.
Key Features
Auto mode — --model auto / /model auto chooses both the model and thinking level for each turn
Thinking-mode streaming — see DeepSeek reasoning blocks as the model works
Full tool suite — file ops, shell execution, git, web search/browse, apply-patch, sub-agents, MCP servers
1M-token context — context tracking, manual or configured compaction, and prefix-cache telemetry
Prefix-cache stability tracking — an optional /statusline footer chip surfaces how stable the cached prefix has been across recent turns so cost-busting edits are visible before they land
Three modes — Plan (read-only explore), Agent (interactive with approval), YOLO (auto-approved)
Reasoning-effort tiers — cycle through off → high → max with Shift + Tab
Session save/resume — checkpoint and resume long-running sessions
Workspace rollback — side-git pre/post-turn snapshots with /restore and revert_turn, without touching your repo's .git
OS-level sandbox — Seatbelt on macOS, Landlock on Linux, Job Objects on Windows; shell commands run with workspace-scoped filesystem access only
Durable task queue — background tasks can survive restarts
HTTP/SSE runtime API — deepseek serve --http for headless agent workflows
MCP protocol — connect to Model Context Protocol servers for extended tooling; please see docs/MCP.md
Native RLM (rlm_open/rlm_eval) — persistent REPL sessions for batched analysis; run cheap deepseek-v4-flash children with bounded helpers like peek, search, chunk, and sub_query_batch
LSP diagnostics — inline error/warning surfacing after every edit via rust-analyzer, pyright, typescript-language-server, gopls, clangd
User memory — optional persistent note file injected into the system prompt for cross-session preferences
Localized UI — en, ja, zh-Hans, pt-BR with auto-detection
Live cost tracking — per-turn and session-level token usage and cost estimates; cache hit/miss breakdown; CNY display when the session locale is zh-Hans
Skills system — composable, installable instruction packs from GitHub; ships with a bundled starter set (skill-creator, mcp-builder, plugin-creator, v4-best-practices, documents, presentations, spreadsheets, pdf, feishu, skill-installer, delegate) so /skills is useful from first launch
Built-in theme picker — Catppuccin, Tokyo Night, Dracula, Gruvbox alongside the original light/dark palettes; switch live with /theme
How It's Wired
deepseek (dispatcher CLI) → deepseek-tui (companion binary) → ratatui interface ↔ async engine ↔ OpenAI-compatible streaming client. Tool calls route through a typed registry (shell, file ops, git, web, sub-agents, MCP, RLM) and results stream back into the transcript. The engine manages session state, turn tracking, the durable task queue, and an LSP subsystem that feeds post-edit diagnostics into the model's context before the next reasoning step.
DeepSeek TUI can dispatch multiple sub-agents that run in parallel — like a concurrent task queue:
Non-blocking launch. agent_open returns immediately. The child gets its own fresh context and tool registry and runs independently. The parent keeps working.
Background execution. Sub-agents execute concurrently (default cap: 10, configurable to 20). The engine manages the pool — no polling loop needed.
Completion notification. When a sub-agent finishes, the runtime delivers a structured event with a summary, evidence list, and execution metrics. The parent model reads the summary field and integrates the findings.
Bounded result retrieval. Large transcripts are parked behind var_handle references. The model calls handle_read for slices, ranges, or JSONPath projections — keeping the parent context lean.
```shell
npm install -g deepseek-tui
deepseek --version
deepseek --model auto
```
Prebuilt binaries are published for Linux x64, Linux ARM64 (v0.8.8+), macOS x64, macOS ARM64, and Windows x64. For other targets (musl, riscv64, FreeBSD, etc.), see Install from source or docs/INSTALL.md.
On first launch you'll be prompted for your DeepSeek API key. The key is saved to ~/.deepseek/config.toml so it works from any directory without OS credential prompts.
You can also set it ahead of time:
```shell
deepseek auth set --provider deepseek   # saves to ~/.deepseek/config.toml
deepseek auth status                    # shows the active credential source
export DEEPSEEK_API_KEY="YOUR_KEY"      # env var alternative; use ~/.zshenv for non-interactive shells
deepseek
deepseek doctor                         # verify setup
```
If deepseek doctor says the rejected key came from DEEPSEEK_API_KEY, remove
the stale export from your shell startup file, open a fresh shell, or run
deepseek auth set --provider deepseek. Use deepseek auth status to see the
config, keyring, and env-var source state without printing the key. Saved config
keys take precedence over the keyring and environment and are easier to rotate.
> To rotate or remove a saved key: deepseek auth clear --provider deepseek.
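To find where a stale export is coming from, a plain grep over the usual shell startup files is enough (the file list below is illustrative; your shell may use others):

```shell
# Look for a lingering DEEPSEEK_API_KEY export in common startup files.
# Prints matching file:line pairs, or a fallback message if nothing is found.
grep -n 'DEEPSEEK_API_KEY' ~/.zshenv ~/.zshrc ~/.bashrc ~/.profile 2>/dev/null \
  || echo "no export found in the usual startup files"
```

Remove the matching line, open a fresh shell, and re-run deepseek doctor to confirm which credential source is active.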
Tencent Cloud / CNB Remote-First Path
For an always-on workspace you can control from a phone, use the Tencent-native
path: CNB mirror/source, Tencent Lighthouse HK, a Feishu/Lark long-connection
bridge, and optional EdgeOne for a deliberate public HTTPS edge. The runtime API
stays bound to localhost; EdgeOne is not used to expose /v1/*.
Use deepseek --model auto or /model auto when you want DeepSeek TUI to decide how much model and reasoning power a turn needs.
Auto mode controls two settings together:
Model: deepseek-v4-flash or deepseek-v4-pro
Thinking: off, high, or max
Before the real turn is sent, the app makes a small deepseek-v4-flash routing call with thinking off. That router looks at the latest request and recent context, then selects a concrete model and thinking level for the real request. Short/simple turns can stay on Flash with thinking off; coding, debugging, release work, architecture, security review, or ambiguous multi-step tasks can move up to Pro and/or higher thinking.
auto is local to DeepSeek TUI. The upstream API never receives model: "auto"; it receives the concrete model and thinking setting chosen for that turn. The TUI shows the selected route, and cost tracking is charged against the model that actually ran. If the router call fails or returns an invalid answer, the app falls back to a local heuristic. Sub-agents inherit auto mode unless you assign them an explicit model.
Use a fixed model or fixed thinking level when you want repeatable benchmarking, a strict cost ceiling, or a specific provider/model mapping.
Linux ARM64 (Raspberry Pi, Asahi, Graviton, HarmonyOS PC)
npm i -g deepseek-tui works on glibc-based ARM64 Linux from v0.8.8 onward. You can also download prebuilt binaries from the Releases page and place them side by side on your PATH.
China / Mirror-friendly Installation
If GitHub or npm downloads are slow from mainland China, use a Cargo registry mirror.
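One commonly used mirror is rsproxy.cn; a sketch of the corresponding `~/.cargo/config.toml` (any sparse-index mirror works the same way — the mirror choice here is an example, not a project recommendation):

```toml
# ~/.cargo/config.toml — route crates.io through a mainland mirror.
[source.crates-io]
replace-with = 'rsproxy-sparse'

[source.rsproxy-sparse]
registry = "sparse+https://rsproxy.cn/index/"
```

After saving this, `cargo install deepseek-tui-cli --locked` fetches crates through the mirror instead of crates.io directly.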
Prebuilt binaries can also be downloaded from GitHub Releases. Use DEEPSEEK_TUI_RELEASE_BASE_URL for mirrored release assets.
Windows (Scoop)
Scoop is a Windows package manager. DeepSeek TUI is listed
in Scoop's main bucket, but that manifest updates independently and can lag the
GitHub/npm/Cargo release. Run scoop update first, then verify the installed
version with deepseek --version.
Inside the TUI, /provider opens the provider picker and /model opens the
model picker. /provider openrouter and /model switch directly, while
/models lists live API models. The /model picker uses the active provider's
live model catalog when the provider exposes one, with provider-aware defaults
as a fallback.
Release Notes
Release-specific changes live in CHANGELOG.md. This README
stays focused on current install paths, core workflows, provider setup, runtime
interfaces, and extension points.
Usage
```shell
deepseek                                         # interactive TUI
deepseek "explain this function"                 # one-shot prompt
deepseek exec --auto --output-format stream-json "fix this bug"   # NDJSON backend stream
deepseek exec --resume "follow up"               # continue a non-interactive session
deepseek --model deepseek-v4-flash "summarize"   # model override
deepseek --model auto "fix this bug"             # auto-select model + thinking
deepseek --yolo                                  # auto-approve tools
deepseek auth set --provider deepseek            # save API key
deepseek doctor                                  # check setup & connectivity
deepseek doctor --json                           # machine-readable diagnostics
deepseek setup --status                          # read-only setup status
deepseek setup --tools --plugins                 # scaffold tool/plugin dirs
deepseek models                                  # list live API models
deepseek sessions                                # list saved sessions
deepseek resume --last                           # resume the most recent session in this workspace
deepseek resume                                  # resume a specific session by UUID
deepseek fork                                    # fork a session at a chosen turn
deepseek serve --http                            # HTTP/SSE API server
deepseek serve --acp                             # ACP stdio adapter for Zed/custom agents
deepseek run pr                                  # fetch PR and pre-seed review prompt
deepseek mcp list                                # list configured MCP servers
deepseek mcp validate                            # validate MCP config/connectivity
deepseek mcp-server                              # run dispatcher MCP stdio server
deepseek update                                  # check for and apply binary updates
```
Docker images for release builds are published to GHCR (ghcr.io/hmbown/deepseek-tui).
The first ACP slice supports new sessions and prompt responses through your
existing DeepSeek config/API key. Tool-backed editing and checkpoint replay are
not exposed through ACP yet.
Community-maintained adapter: acp-deepseek-adapter
bridges deepseek exec --auto to cc-connect for users who need tool-backed
ACP workflows outside the built-in Zed slice.
Keyboard Shortcuts
| Key | Action |
| --- | --- |
| Tab | Complete `/` or `@` entries; while running, queue draft as follow-up; otherwise cycle mode |
| Shift+Tab | Cycle reasoning effort: off → high → max |
| F1 | Searchable help overlay |
| Esc | Back / dismiss |
| Ctrl+K | Command palette |
| Ctrl+R | Resume an earlier session |
| Alt+R | Search prompt history and recover cleared drafts |
| Ctrl+S | Stash current draft (`/stash list`, `/stash pop` to recover) |
Modes

| Mode | Description |
| --- | --- |
| Plan | Read-only investigation — model explores and proposes a plan before making changes; multi-step investigations use checklist_write |
| Agent 🤖 | Default interactive mode — multi-step tool use with approval gates; substantial work is tracked with checklist_write |
| YOLO ⚡ | Auto-approve all tools in a trusted workspace; multi-step work still keeps a visible checklist |
Configuration
User config: ~/.deepseek/config.toml. Project overlay: <workspace>/.deepseek/config.toml (denied keys: api_key, base_url, provider, mcp_config_path). config.example.toml documents every option.
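A minimal sketch of the user config, assuming the TOML key names match the CLI flag names shown elsewhere in this README — check config.example.toml for the authoritative list:

```toml
# ~/.deepseek/config.toml — illustrative; key names are assumptions
# apart from api_key, base_url, and provider, which this README names.
provider = "deepseek"
base_url = "https://api.deepseek.com/beta"   # the default; drop /beta to opt out
# api_key = "sk-..."                         # written here by `deepseek auth set`
```

A project overlay at `<workspace>/.deepseek/config.toml` may override the remaining keys, but the denied keys above are always taken from the user config.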
Key environment variables:
| Variable | Purpose |
| --- | --- |
| `DEEPSEEK_API_KEY` | API key |
| `DEEPSEEK_BASE_URL` | API base URL |
| `DEEPSEEK_HTTP_HEADERS` | Optional custom model request headers, e.g. `X-Model-Provider-Id=your-model-provider` |
| `DEEPSEEK_MODEL` | Default model |
| `DEEPSEEK_STREAM_IDLE_TIMEOUT_SECS` | Stream idle timeout in seconds, default 300, clamped to 1..=3600 |
Set locale in settings.toml, use /config locale zh-Hans, or rely on LC_ALL/LANG to choose UI chrome and the fallback language sent to V4 models. The latest user message still wins for natural-language reasoning and replies, so Chinese user turns stay Chinese even on an English system locale. See docs/CONFIGURATION.md and docs/MCP.md.
Models & Pricing
| Model | Context | Input (cache hit) | Input (cache miss) | Output |
| --- | --- | --- | --- | --- |
| deepseek-v4-pro | 1M | $0.003625 / 1M* | $0.435 / 1M* | $0.87 / 1M* |
| deepseek-v4-flash | 1M | $0.0028 / 1M | $0.14 / 1M | $0.28 / 1M |

\* Discounted rates; see the note below.
DeepSeek Platform defaults to https://api.deepseek.com/beta so beta-gated API features can be tested without extra setup. Set base_url = "https://api.deepseek.com" to opt out.
Legacy aliases deepseek-chat / deepseek-reasoner map to deepseek-v4-flash and retire after July 24, 2026. NVIDIA NIM variants use your NVIDIA account terms.
DeepSeek Pro rates currently reflect a limited-time 75% discount, valid until 15:59 UTC on 31 May 2026; after that, the TUI cost estimator reverts to the base Pro rates.

> [!NOTE]
> For the latest DeepSeek-V4-Pro pricing, including the current discount, consult the official DeepSeek pricing page. Rates listed in this README correspond to the officially published values.
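To sanity-check what the live cost tracker reports, the table's per-million-token rates can be applied by hand. A sketch with assumed token counts (200k cache-hit input, 50k cache-miss input, 8k output, at the discounted Pro rates above):

```shell
# Back-of-envelope cost of one Pro turn; rates are USD per 1M tokens.
awk 'BEGIN {
  cache_hit  = 200000 / 1e6 * 0.003625   # cached prefix input
  cache_miss =  50000 / 1e6 * 0.435      # fresh input
  output     =   8000 / 1e6 * 0.87       # generated tokens
  printf "%.4f USD\n", cache_hit + cache_miss + output
}'
# → 0.0294 USD
```

The cache-hit term is two orders of magnitude cheaper than a cache miss, which is why the prefix-cache stability chip calls out edits that invalidate the cached prefix.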
Publishing Your Own Skill
DeepSeek TUI discovers skills from workspace directories (.agents/skills → skills → .opencode/skills → .claude/skills → .cursor/skills) and global directories (~/.agents/skills → ~/.claude/skills → ~/.deepseek/skills). Each skill is a directory with a SKILL.md file:
```
~/.agents/skills/my-skill/
└── SKILL.md
```
Frontmatter required:
```markdown
---
name: my-skill
description: Use this when DeepSeek should follow my custom workflow.
---

# My Skill

Instructions for the agent go here.
```
Commands: /skills (list), /skill <name> (activate), /skill new (scaffold), /skill install github:<owner>/<repo> (community), and /skill update / uninstall / trust. Community installs from GitHub require no backend service. Installed skills appear in the model-visible session context; the agent can auto-select relevant skills via the load_skill tool when your task matches their descriptions.
First launch also installs bundled system skills for common workflows:
skill-creator, delegate, v4-best-practices, plugin-creator,
skill-installer, mcp-builder, documents, presentations,
spreadsheets, pdf, and feishu. These live under
~/.deepseek/skills and are versioned so new bundles are added on upgrade
without recreating skills the user deliberately deleted.
dfwqdyl-ui — model ID case-sensitivity compatibility report (#729)
Oliver-ZPLiu — stale working... state bug report, Windows clipboard fallback, MCP Streamable HTTP session fixes, and Homebrew tap automation (#738, #850, #1643, #1631)
reidliu41 — resume hint, workspace trust persistence, Ollama provider support, thinking-block stream finalization, CI cache hardening, streaming wrap, and DeepSeek model completions (#863, #870, #921, #1078, #1603, #1628, #1601)