Gonzo is a Go-based TUI (Terminal User Interface) tool designed for real-time log analysis and visualization. It provides developers, DevOps engineers, and system administrators with an efficient way to monitor, analyze, and understand log data from various sources.
Key Features:
Real-time log processing from files, stdin, or network streams.
AI-powered insights using OpenAI, Ollama, or local models for anomaly detection and pattern analysis.
Customizable themes with support for light/dark modes and 11+ built-in skins.
Advanced filtering capabilities including regex, attribute search, and severity-based filtering.
Interactive dashboard with real-time charts, heatmap visualizations, and keyboard/mouse navigation.
Support for OpenTelemetry Protocol (OTLP) via gRPC and HTTP receivers.
Audience & Benefit:
Ideal for developers and operations teams who need to analyze log streams in real time. Gonzo helps identify patterns, anomalies, and trends quickly, enabling faster troubleshooting and decision-making. Its support for custom log formats and AI-driven insights makes it a versatile tool for handling complex logging environments.
README
Gonzo - The Go-based TUI for log analysis
A powerful, real-time log analysis terminal UI inspired by k9s. Analyze log streams with beautiful charts, AI-powered insights, and advanced filtering - all from your terminal.
See it in action
Main Dashboard
Stats and Info
Everyone loves a heatmap
✨ Features
🎯 Real-Time Analysis
Live streaming - Process logs as they arrive from stdin, files, or network
OTLP native - First-class support for OpenTelemetry log format
OTLP receiver - Built-in gRPC server to receive logs via OpenTelemetry protocol
Format detection - Automatically detects JSON, logfmt, and plain text
Custom formats - Define your own log formats with YAML configuration
Severity tracking - Color-coded severity levels with distribution charts
📊 Interactive Dashboard
k9s-inspired layout - Familiar 2x2 grid interface
Real-time charts - Word frequency, attributes, severity distribution, and time series
Keyboard + mouse navigation - Vim-style shortcuts plus click-to-navigate and scroll wheel support
Smart log viewer - Auto-scroll with intelligent pause/resume behavior
🤖 AI-Powered Analysis
Anomaly analysis - Spot unusual patterns in your logs
Root cause suggestions - Get AI-powered debugging assistance
Configurable models - Choose from GPT-4, GPT-3.5, or any custom model
Multiple providers - Works with OpenAI, LM Studio, Ollama, or any OpenAI-compatible API
Local AI support - Run completely offline with local models
🚀 Quick Start
Installation
Using Go
go install github.com/control-theory/gonzo/cmd/gonzo@latest
Using Homebrew (macOS/Linux)
brew install gonzo
Download Binary
Download the latest release for your platform from the releases page.
Using Nix package manager (beta support)
nix run github:control-theory/gonzo
Build from Source
git clone https://github.com/control-theory/gonzo.git
cd gonzo
make build
📖 Usage
Basic Usage
# Read logs directly from files
gonzo -f application.log
# Read from multiple files
gonzo -f application.log -f error.log -f debug.log
# Use glob patterns to read multiple files
gonzo -f "/var/log/*.log"
gonzo -f "/var/log/app/*.log" -f "/var/log/nginx/*.log"
# Follow log files in real-time (like tail -f)
gonzo -f /var/log/app.log --follow
gonzo -f "/var/log/*.log" --follow
# Analyze logs from stdin (traditional way)
cat application.log | gonzo
# Stream logs from kubectl
kubectl logs -f deployment/my-app | gonzo
# Follow system logs
tail -f /var/log/syslog | gonzo
# Analyze Docker container logs
docker logs -f my-container 2>&1 | gonzo
# With AI analysis (requires API key)
export OPENAI_API_KEY=sk-your-key-here
gonzo -f application.log --ai-model="gpt-4"
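If you want to try the dashboard without real traffic, you can pipe synthetic logs into Gonzo. Below is a minimal sketch; the field names, services, and messages are arbitrary examples, not anything Gonzo requires (it auto-detects JSON):

```python
#!/usr/bin/env python3
"""Emit synthetic JSON log lines to stdout, e.g.: python gen_logs.py | gonzo"""
import json
import random
import sys
import time
from datetime import datetime, timezone

LEVELS = ["DEBUG", "INFO", "WARN", "ERROR"]
SERVICES = ["checkout", "auth", "payments"]

def log_line():
    # One JSON-encoded log record per line
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": random.choice(LEVELS),
        "service": random.choice(SERVICES),
        "message": f"request completed in {random.randint(1, 500)}ms",
    })

if __name__ == "__main__":
    for _ in range(100):
        print(log_line())
        sys.stdout.flush()  # flush so Gonzo sees lines immediately
        time.sleep(0.02)
```

Piping this into gonzo should populate the severity and attribute charts with recognizable values.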
Custom Log Formats
Gonzo supports custom log formats through YAML configuration files. This allows you to parse any structured log format without modifying the source code.
Some example custom formats are included in the repo; simply download, copy, or modify them as you like!
For the commands below to work, you must first download the format files and place them in the Gonzo config directory.
# Use a built-in custom format
gonzo --format=loki-stream -f loki_logs.json
# List available custom formats
ls ~/.config/gonzo/formats/
# Use your own custom format
gonzo --format=my-custom-format -f custom_logs.txt
Custom formats support:
Flexible field mapping - Map any JSON/text fields to timestamp, severity, body, and attributes
Auto-mapping - Automatically extract all unmapped fields as attributes
Nested field extraction - Extract fields from deeply nested JSON structures
Pattern-based parsing - Use regex patterns for unstructured text logs
For detailed information on creating custom formats, see the Custom Formats Guide.
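To illustrate the idea, a custom format file might look roughly like the sketch below. The key names here are illustrative only; the authoritative schema is in the Custom Formats Guide:

```yaml
# ~/.config/gonzo/formats/my-app.yaml
# Illustrative sketch only -- field and key names are assumptions,
# not the actual schema (see the Custom Formats Guide).
name: my-app
json:
  timestamp: time        # map the JSON "time" field to the log timestamp
  severity: level        # map "level" to severity
  body: message          # map "message" to the log body
  auto_attributes: true  # extract all remaining fields as attributes
```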
OTLP Network Receiver
Gonzo can receive logs directly via OpenTelemetry Protocol (OTLP) over both gRPC and HTTP:
# Start Gonzo as an OTLP receiver (both gRPC on port 4317 and HTTP on port 4318)
gonzo --otlp-enabled
# Use custom ports
gonzo --otlp-enabled --otlp-grpc-port=5317 --otlp-http-port=5318
# gRPC endpoint: localhost:4317
# HTTP endpoint: http://localhost:4318/v1/logs
Using gRPC:
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
exporter = OTLPLogExporter(
    endpoint="localhost:4317",
    insecure=True
)
Using HTTP:
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
exporter = OTLPLogExporter(
    endpoint="http://localhost:4318/v1/logs",
)
See examples/send_otlp_logs.py for a complete example.
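If you prefer not to use an SDK, you can also exercise the HTTP receiver with curl. The payload below is a minimal log record in the standard OTLP/HTTP JSON encoding; this assumes Gonzo's HTTP receiver accepts that encoding:

```shell
# Send one log record to the OTLP/HTTP endpoint (standard OTLP JSON encoding assumed)
curl -s -X POST http://localhost:4318/v1/logs \
  -H "Content-Type: application/json" \
  -d '{
    "resourceLogs": [{
      "resource": {
        "attributes": [{
          "key": "service.name",
          "value": {"stringValue": "curl-test"}
        }]
      },
      "scopeLogs": [{
        "logRecords": [{
          "severityText": "ERROR",
          "body": {"stringValue": "test log from curl"}
        }]
      }]
    }]
  }'
```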
With AI Analysis
# Auto-select best available model (recommended) - file input
export OPENAI_API_KEY=sk-your-key-here
gonzo -f logs.json
# Or specify a particular model - file input
export OPENAI_API_KEY=sk-your-key-here
gonzo -f logs.json --ai-model="gpt-4"
# Follow logs with AI analysis
export OPENAI_API_KEY=sk-your-key-here
gonzo -f "/var/log/app.log" --follow --ai-model="gpt-4"
# Using local LM Studio (auto-selects first available)
export OPENAI_API_KEY="local-key"
export OPENAI_API_BASE="http://localhost:1234/v1"
gonzo -f logs.json
# Using Ollama (auto-selects best model like gpt-oss:20b)
export OPENAI_API_KEY="ollama"
export OPENAI_API_BASE="http://localhost:11434"
gonzo -f logs.json --follow
# Traditional stdin approach still works
export OPENAI_API_KEY=sk-your-key-here
cat logs.json | gonzo --ai-model="gpt-4"
Keyboard Shortcuts
Navigation
Tab / Shift+Tab - Navigate between panels
Mouse Click - Click on any section to switch to it
↑/↓ or k/j - Move selection up/down
Mouse Wheel - Scroll up/down to navigate selections
←/→ or h/l - Horizontal navigation
Enter - View log details or open analysis modal (Counts section)
ESC - Close modal/cancel
Actions
Space - Pause/unpause entire dashboard
/ - Enter filter mode (regex supported)
s - Search and highlight text in logs
Ctrl+f - Open severity filter modal
f - Open fullscreen log viewer modal
c - Toggle Host/Service columns in log view
r - Reset all data (manual reset)
u / U - Cycle update intervals (forward/backward)
i - AI analysis (in detail view)
m - Switch AI model (shows available models)
? / h - Show help
q / Ctrl+C - Quit
Log Viewer Navigation
Home - Jump to top of log buffer (stops auto-scroll)
End - Jump to latest logs (resumes auto-scroll)
PgUp / PgDn - Navigate by pages (10 entries at a time)
↑/↓ or k/j - Navigate entries with smart auto-scroll
AI Chat (in log detail modal)
c - Start chat with AI about current log
Tab - Switch between log details and chat pane
m - Switch AI model (works in modal too)
Severity Filter Modal
The severity filter modal (Ctrl+f) provides fine-grained control over which log levels to display:
↑/↓ or k/j - Navigate severity options
Space - Toggle selected severity level on/off
Enter - Apply filter and close modal (or select All/None)
ESC - Cancel changes and close modal
Features:
Select All - Quick option to enable all severity levels (Enter to apply and close)
Select None - Quick option to disable all severity levels (Enter to apply and close)
Color-coded display - Each severity level shows in its standard color
Real-time count - Header shows how many levels are currently active
Persistent filtering - Applied filters remain active until changed
Quick shortcuts - Press Enter on Select All/None to apply immediately
Log Counts Analysis Modal
Press Enter on the Counts section to open a comprehensive analysis modal featuring:
🔥 Real-Time Heatmap Visualization
Time-series heatmap showing severity levels vs. time (1-minute resolution)
60-minute rolling window with automatic scaling per severity level
Color-coded intensity using shaded block characters (░▒▓█) with gradient effects
Precise alignment with time headers showing minutes ago (60, 50, 40, ..., 10, 0)
Receive time architecture - visualization based on when logs were received for reliable display
🔍 Pattern Analysis by Severity
Top 3 patterns per severity using drain3 pattern extraction algorithm
Severity-specific tracking with dedicated drain3 instances for each level
Real-time pattern detection as logs arrive and are processed
Accurate pattern counts maintained separately for each severity level
🏢 Service Distribution Analysis
Top 3 services per severity showing which services generate each log level
Service name extraction from common attributes (service.name, service, app, etc.)
Real-time updates as new logs are processed and analyzed
Fallback to host information when service names are not available
⌨️ Modal Navigation
Scrollable content using mouse wheel or arrow keys
ESC to close and return to main dashboard
Full-width display maximizing screen real estate for data visualization
Real-time updates - data refreshes automatically as new logs arrive
The modal uses the same receive time architecture as the main dashboard, ensuring consistent and reliable visualization regardless of log timestamp accuracy or clock skew issues.
⚙️ Configuration
Command Line Options
gonzo [flags]
gonzo [command]
Commands:
version Print version information
help Help about any command
completion Generate shell autocompletion
Flags:
-f, --file stringArray Files or file globs to read logs from (can specify multiple)
--follow Follow log files like 'tail -f' (watch for new lines in real-time)
--format string Log format to use (auto-detect if not specified). Can be: otlp, json, text, or a custom format name
-u, --update-interval duration Dashboard update interval (default: 1s)
-b, --log-buffer int Maximum log entries to keep (default: 1000)
-m, --memory-size int Maximum frequency entries (default: 10000)
--ai-model string AI model for analysis (auto-selects best available if not specified)
-s, --skin string Color scheme/skin to use (default, or name of a skin file)
--stop-words strings Additional stop words to filter out from analysis (adds to built-in list)
-t, --test-mode Run without TTY for testing
-v, --version Print version information
--config string Config file (default: $HOME/.config/gonzo/config.yml)
-h, --help Show help message
Configuration File
Create ~/.config/gonzo/config.yml for persistent settings:
See examples/config.yml for a complete configuration example with detailed comments.
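As a rough sketch of what such a file can contain (the key names below are assumed to mirror the command-line flags; consult examples/config.yml for the authoritative names):

```yaml
# ~/.config/gonzo/config.yml
# Illustrative sketch -- key names assumed to mirror the CLI flags.
follow: true
update-interval: 2s
log-buffer: 2000
memory-size: 20000
ai-model: "gpt-4"
skin: "dracula"
stop-words:
  - "healthcheck"
  - "heartbeat"
```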
AI Configuration
Gonzo supports multiple AI providers for intelligent log analysis. Configure using command line flags and environment variables. You can switch between available models at runtime using the m key.
OpenAI
# Set your API key
export OPENAI_API_KEY="sk-your-actual-key-here"
# Auto-select best available model (recommended)
cat logs.json | gonzo
# Or specify a particular model
cat logs.json | gonzo --ai-model="gpt-4"
LM Studio (Local AI)
# 1. Start LM Studio server with a model loaded
# 2. Set environment variables (IMPORTANT: include /v1 in URL)
export OPENAI_API_KEY="local-key"
export OPENAI_API_BASE="http://localhost:1234/v1"
# Auto-select first available model (recommended)
cat logs.json | gonzo
# Or specify the exact model name from LM Studio
cat logs.json | gonzo --ai-model="openai/gpt-oss-120b"
Ollama (Local AI)
# 1. Start Ollama: ollama serve
# 2. Pull a model: ollama pull gpt-oss:20b
# 3. Set environment variables (note: no /v1 suffix needed)
export OPENAI_API_KEY="ollama"
export OPENAI_API_BASE="http://localhost:11434"
# Auto-select best model (prefers gpt-oss, llama3, mistral, etc.)
cat logs.json | gonzo
# Or specify a particular model
cat logs.json | gonzo --ai-model="gpt-oss:20b"
cat logs.json | gonzo --ai-model="llama3"
Custom OpenAI-Compatible APIs
# For any OpenAI-compatible API endpoint
export OPENAI_API_KEY="your-api-key"
export OPENAI_API_BASE="https://api.your-provider.com/v1"
cat logs.json | gonzo --ai-model="your-model-name"
Runtime Model Switching
Once Gonzo is running, you can switch between available AI models without restarting:
Press m anywhere in the interface to open the model selection modal
Navigate with arrow keys, page up/down, or mouse wheel
Select a model with Enter
Cancel with Escape
The model selection modal shows:
All available models from your configured AI provider
Current active model (highlighted in green)
Dynamic sizing based on terminal height
Scroll indicators when there are many models
Note: Model switching requires the AI service to be properly configured and running. The modal will only appear if models are available from your AI provider.
Auto Model Selection
When you don't specify the --ai-model flag, Gonzo automatically selects the best available model:
Selection Priority:
OpenAI: Prefers gpt-4 → gpt-3.5-turbo → first available
Ollama: Prefers gpt-oss:20b → llama3 → mistral → codellama → first available
LM Studio: Uses first available model from the server
Other providers: Uses first available model
Benefits:
✅ No need to know model names beforehand
✅ Works immediately with any AI provider
✅ Intelligent defaults for better performance
✅ Still allows manual model selection with m key
Example: Instead of gonzo --ai-model="llama3", simply run gonzo and it will auto-select llama3 if available.
Troubleshooting AI Setup
LM Studio Issues:
✅ Ensure server is running and model is loaded
✅ Use full model name: --ai-model="openai/model-name"
✅ Include /v1 in base URL: http://localhost:1234/v1
✅ Check available models: curl http://localhost:1234/v1/models
Ollama Issues:
✅ Start server: ollama serve
✅ Verify model: ollama list
✅ Test API: curl http://localhost:11434/api/tags
✅ Use correct URL: http://localhost:11434 (no /v1 suffix)
✅ Model names include tags: gpt-oss:20b, llama3:8b
OpenAI Issues:
✅ Verify API key is valid and has credits
✅ Check model availability (gpt-4 requires API access)
Environment Variables
OPENAI_API_KEY - API key for AI analysis (required for AI features)
OPENAI_API_BASE - Custom API endpoint (default: )
GONZO_FILES - Comma-separated list of files/globs to read (equivalent to -f flags)
GONZO_FOLLOW - Enable follow mode (true/false)
GONZO_UPDATE_INTERVAL - Override update interval
GONZO_LOG_BUFFER - Override log buffer size
GONZO_MEMORY_SIZE - Override memory size
GONZO_AI_MODEL - Override default AI model
GONZO_TEST_MODE - Enable test mode
NO_COLOR - Disable colored output
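For example, a flag-based invocation can equivalently be driven by environment variables (a sketch based on the variables listed above):

```shell
# Equivalent to: gonzo -f "/var/log/*.log" --follow --ai-model="gpt-4"
export GONZO_FILES="/var/log/*.log"
export GONZO_FOLLOW=true
export GONZO_AI_MODEL="gpt-4"
gonzo
```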
Shell Completion
Enable shell completion for better CLI experience:
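A typical setup, assuming the completion subcommand listed under Commands above follows the usual Cobra conventions:

```shell
# Bash: load completions for the current session
source <(gonzo completion bash)

# Zsh: install completions into a directory on your fpath
gonzo completion zsh > "${fpath[1]}/_gonzo"

# Fish: install completions permanently
gonzo completion fish > ~/.config/fish/completions/gonzo.fish
```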
> ⚠️ NOTE: On macOS, although it is not required, defining XDG_CONFIG_HOME=~/.config is recommended to maintain consistency with Linux configuration practices.
Project Structure
cmd/gonzo/ # Main application entry
internal/
├── tui/ # Terminal UI implementation
├── analyzer/ # Log analysis engine
├── memory/ # Frequency tracking
├── otlplog/ # OTLP format handling
└── ai/ # AI integration
🧪 Development
Prerequisites
Go 1.21 or higher
Make (optional, for convenience)
Building
# Quick build
make build
# Run tests
make test
# Build for all platforms
make cross-build
# Development mode (format, vet, test, build)
make dev
Testing
# Run unit tests
make test
# Run with race detection
make test-race
# Integration tests
make test-integration
# Test with sample data
make demo
🎨 Customization & Themes
Gonzo supports beautiful, customizable color schemes to match your terminal environment and personal preferences.
Using Built-in Themes
Be sure to download the theme files and place them in the Gonzo config directory so Gonzo can find them.
# Use a dark theme
gonzo --skin=dracula
gonzo --skin=nord
gonzo --skin=monokai
# Use a light theme
gonzo --skin=github-light
gonzo --skin=solarized-light
gonzo --skin=vs-code-light
# Use Control Theory branded themes
gonzo --skin=controltheory-light # Light theme
gonzo --skin=controltheory-dark # Dark theme
Available Themes
Dark Themes 🌙: default, controltheory-dark, dracula, gruvbox, monokai, nord, solarized-dark
Light Themes ☀️: controltheory-light, github-light, solarized-light, vs-code-light, spring