AxonHub is an all-in-one AI development platform that provides a unified API gateway, project management, and comprehensive development tools. It offers OpenAI-, Anthropic-, and AI SDK-compatible API layers, transforming requests to various AI providers through a transformer pipeline architecture. The platform features comprehensive tracing, project-based organization, and an integrated playground for rapid prototyping, helping developers and enterprises better manage their AI development workflows.
Core Features
Unified API: OpenAI- and Anthropic-compatible interface with automatic API translation lets you use one API format to access any supported model provider.
Tracing / Threads: Thread-aware tracing captures full request timelines for deep observability and faster debugging.
Fine-grained Permissions: RBAC-based policies help teams govern access, usage, and data segregation precisely.
Adaptive Load Balancing: Intelligent multi-strategy load balancing automatically selects optimal AI channels based on health, performance, and session consistency.
Documentation
For detailed technical documentation, API references, architecture design, and more, please refer to the documentation.
AxonHub records every request as part of a thread-aware trace without requiring you to adopt any vendor-specific SDK. Bring your existing OpenAI-compatible client, and AxonHub will:
Honor incoming AH-Trace-Id headers to stitch multiple requests into the same trace (see the sketch after this list). If the header is omitted, AxonHub still records the request but cannot automatically link it to related activity.
Link traces to threads so you can follow the entire conversation journey end to end
Capture model metadata, prompt / response spans, and timing information for fast root-cause analysis
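For example, with the OpenAI Python SDK you can attach a trace ID per request through extra headers. A minimal sketch, assuming AxonHub exposes an OpenAI-compatible endpoint at http://localhost:8090/v1 (adjust the base URL, API key, and model to your deployment):

```python
# Minimal sketch: reuse an existing OpenAI-compatible client and pass an
# AH-Trace-Id header so related requests are stitched into one trace.
# The base URL, API key, and model below are illustrative placeholders.
import uuid

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8090/v1",   # assumed AxonHub endpoint
    api_key="YOUR_AXONHUB_API_KEY",
)

trace_id = str(uuid.uuid4())  # reuse the same ID across related requests

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our last deployment."}],
    extra_headers={"AH-Trace-Id": trace_id},  # omit it and the request is still recorded, just unlinked
)
print(response.choices[0].message.content)
```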
Learn more about how tracing works and how to integrate it in the Tracing Guide.
API Format Support

| Format | Status | Compatibility | Modalities |
| --- | --- | --- | --- |
| OpenAI Chat Completions | Done | Fully compatible | Text, Image |
| OpenAI Responses | Partial | No previous_response_id | Text, Image |
| Anthropic Messages | Done | Fully supported | Text |
| Gemini | Done | Fully supported | Text, Image |
| AI SDK | Partial | Partially supported | Text |
Key Feature: Use the OpenAI API to call Anthropic models, or the Anthropic API to call OpenAI models - AxonHub handles the API translation automatically!
# Extract and run
unzip axonhub_*.zip
cd axonhub_*
# Set environment variables
export AXONHUB_DB_DIALECT="tidb"
export AXONHUB_DB_DSN=".root:@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true"
sudo ./install.sh
# Configuration file check
axonhub config check
# Start service
# For simplicity, we recommend managing AxonHub with the helper scripts:
# Start
./start.sh
# Stop
./stop.sh
Usage Guide
Unified API Overview
AxonHub provides a unified API gateway that supports the OpenAI Chat Completions, Anthropic Messages, and Gemini APIs. This means you can:
Use OpenAI API to call Anthropic models - Keep using your OpenAI SDK while accessing Claude models
Use Anthropic API to call OpenAI models - Use Anthropic's native API format with GPT models
Use Gemini API to call OpenAI models - Use Gemini's native API format with GPT models
Automatic API translation - AxonHub handles format conversion automatically
Zero code changes - Your existing OpenAI or Anthropic client code continues to work, as shown in the sketch below
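As a concrete illustration, here is a minimal sketch of calling an Anthropic model through the OpenAI Chat Completions format. The base URL, API key, and model ID are assumptions for illustration; use the endpoint and models configured in your AxonHub instance.

```python
# Minimal sketch: an unchanged OpenAI SDK client pointed at AxonHub,
# requesting an Anthropic model. AxonHub translates the request and
# response formats. Base URL, key, and model ID are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8090/v1",   # assumed AxonHub OpenAI-compatible endpoint
    api_key="YOUR_AXONHUB_API_KEY",
)

response = client.chat.completions.create(
    model="claude-3-opus",  # an Anthropic model served through AxonHub
    messages=[{"role": "user", "content": "Explain what an API gateway does."}],
)
print(response.choices[0].message.content)
```

The reverse direction works the same way: point an Anthropic- or Gemini-format client at the corresponding AxonHub endpoint and request an OpenAI model ID.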
1. Initial Setup
Access Management Interface: open http://localhost:8090
Configure AI Providers: add API keys in the management interface and test connections to confirm the configuration is correct
Create Users and Roles: set up permission management and assign appropriate access permissions
2. Channel Configuration
Configure AI provider channels in the management interface. For detailed information on channel configuration, including model mappings, parameter overrides, and troubleshooting, see the Channel Configuration Guide.
3. Model Management
AxonHub provides a flexible model management system that supports mapping abstract models to specific channels and model implementations through Model Associations. This enables:
Unified Model Interface - Use abstract model IDs (e.g., gpt-4, claude-3-opus) instead of channel-specific names
Intelligent Channel Selection - Automatically route requests to optimal channels based on association rules and load balancing
Flexible Mapping Strategies - Support for precise channel-model matching, regex patterns, and tag-based selection
Priority-based Fallback - Configure multiple associations with priorities for automatic failover (see the conceptual sketch below)
For comprehensive information on model management, including association types, configuration examples, and best practices, see the Model Management Guide.
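To make priority-based fallback concrete, here is a conceptual sketch, not AxonHub's actual code: all names and data structures are hypothetical, and it only illustrates how priority-ordered associations with regex matching and health-aware failover can be resolved.

```python
# Conceptual sketch only: priority-ordered associations with regex matching
# and failover. Names and structures are hypothetical, not AxonHub's API.
import re
from dataclasses import dataclass

@dataclass
class Association:
    pattern: str        # abstract model ID or regex, e.g. "gpt-4" or r"claude-3-.*"
    channel: str        # channel to route to
    target_model: str   # channel-specific model name
    priority: int       # lower value is tried first

def resolve(model: str, associations: list[Association], healthy: set[str]) -> tuple[str, str]:
    """Pick the highest-priority healthy association whose pattern matches the model."""
    candidates = [a for a in associations if re.fullmatch(a.pattern, model)]
    for assoc in sorted(candidates, key=lambda a: a.priority):
        if assoc.channel in healthy:
            return assoc.channel, assoc.target_model
    raise LookupError(f"no healthy channel for model {model!r}")

# Example: the primary channel is unhealthy, so the request fails over.
assocs = [
    Association(r"gpt-4.*", "openai-primary", "gpt-4o", priority=1),
    Association(r"gpt-4.*", "azure-backup", "gpt-4o", priority=2),
]
print(resolve("gpt-4o", assocs, healthy={"azure-backup"}))  # ('azure-backup', 'gpt-4o')
```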
4. Create API Keys
Create API keys to authenticate your applications with AxonHub. Each API key can be configured with multiple profiles that define:
Model Mappings - Transform user-requested models into actual available models using exact match or regex patterns (see the sketch below)
Channel Restrictions - Limit which channels an API key can use by channel IDs or tags
Model Access Control - Control which models are accessible through a specific profile
Profile Switching - Change behavior on-the-fly by activating different profiles
For detailed information on API key profiles, including configuration examples, validation rules, and best practices, see the API Key Profile Guide.
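As a hedged illustration of the mapping idea (exact matches first, then regex patterns, otherwise the requested model passes through unchanged), here is a hypothetical sketch; it does not reflect AxonHub's actual configuration schema or rule names.

```python
# Hypothetical sketch of profile model mapping: exact matches win, then
# regex patterns are tried in order; otherwise the requested model is
# passed through unchanged. Rules and structure are illustrative only.
import re

exact_rules = {"gpt-4": "gpt-4o"}
regex_rules = [(r"claude-3-.*", "claude-3-opus")]

def map_model(requested: str) -> str:
    if requested in exact_rules:
        return exact_rules[requested]
    for pattern, target in regex_rules:
        if re.fullmatch(pattern, requested):
            return target
    return requested

print(map_model("gpt-4"))           # -> gpt-4o
print(map_model("claude-3-haiku"))  # -> claude-3-opus
print(map_model("gemini-1.5-pro"))  # unchanged
```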
5. Claude Code/Codex Integration
See the dedicated guides for detailed setup steps, troubleshooting, and tips on combining these tools with AxonHub model profiles: