
GPT-Load
High-Performance AI API Transparent Proxy
An enterprise-grade AI API proxy service built with Go 1.23+, supporting multiple AI providers such as OpenAI, Google Gemini, and Anthropic Claude. It features intelligent key management, load balancing, high-concurrency handling, and comprehensive monitoring.
docker run -d --name gpt-load \
-p 3001:3001 \
-e AUTH_KEY=your-secure-key-here \
-v "$(pwd)/data":/app/data \
ghcr.io/tbphp/gpt-load:latest
# Access the admin interface: http://localhost:3001
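Once the container is running, clients send requests to the proxy in the standard OpenAI API format. A minimal sketch, with assumptions: the `/proxy/openai/...` path, the group name `openai`, and the proxy key are all illustrative here; actual group names and keys are configured in the admin interface.

```shell
# Send a chat completion through the proxy in OpenAI format.
# "openai" is an assumed group name; the Bearer token is whatever
# proxy key you configure for that group in the admin interface.
curl http://localhost:3001/proxy/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-proxy-key" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

Because the proxy is transparent, existing OpenAI SDK clients can be pointed at it by changing only the base URL and API key.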
System Architecture
GPT-Load adopts a three-tier architecture to deliver high-performance, highly available AI API proxy services.
Data Flow Architecture
Client Applications
Web/Mobile apps call through standard OpenAI API format
GPT-Load Proxy Layer
Core proxy service responsible for request forwarding and management
AI Service Providers
Unified access to multiple AI services
Infrastructure Components
MySQL 8.2+
Persistent Storage
Redis
Cache & Locks
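The infrastructure components above are wired up through environment variables. A sketch of the relevant `.env` entries, assuming the variable names follow the project's `.env.example` (verify against your copy); hostnames and credentials are illustrative.

```shell
# .env — connecting GPT-Load to its infrastructure (names assumed
# to match .env.example; hostnames/credentials are placeholders)

# MySQL 8.2+ for persistent storage
DATABASE_DSN=root:your-password@tcp(mysql:3306)/gpt-load?charset=utf8mb4&parseTime=True

# Redis for caching and distributed locks
REDIS_DSN=redis://redis:6379/0

# Key protecting the web control panel
AUTH_KEY=your-secure-key-here
```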
Management Interface
Web Control Panel
Flexible Deployment Options
Standalone Deployment
- Docker Compose one-click startup
- Includes complete MySQL + Redis
- Suitable for development and small production
Cluster Deployment
- Master/Slave architecture
- Horizontal scaling support
- High availability guarantee
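A cluster sketch under stated assumptions: every node points at the same shared MySQL and Redis, and follower nodes are marked as slaves. The `IS_SLAVE` flag and the DSN variable names are assumptions based on the project's configuration conventions; the internal hostnames are placeholders.

```shell
# Start an additional follower node against the shared infrastructure.
# IS_SLAVE / DATABASE_DSN / REDIS_DSN are assumed variable names;
# db.internal and cache.internal are illustrative hostnames.
docker run -d --name gpt-load-slave-1 \
  -p 3002:3001 \
  -e AUTH_KEY=your-secure-key-here \
  -e IS_SLAVE=true \
  -e DATABASE_DSN="root:your-password@tcp(db.internal:3306)/gpt-load" \
  -e REDIS_DSN="redis://cache.internal:6379/0" \
  ghcr.io/tbphp/gpt-load:latest
```

With all state in MySQL and coordination in Redis, additional nodes can be added behind any standard load balancer.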
Start GPT-Load in 3 Steps
Quick deployment via Docker Compose, including complete database and cache services
1. Clone Project
Download complete project code from GitHub
git clone https://github.com/tbphp/gpt-load.git
cd gpt-load
2. Configure Environment
Copy and edit environment configuration file
# Copy environment configuration file
cp .env.example .env
# Edit configuration (optional)
# vim .env
# Main configuration items:
# APP_PORT=3001
# APP_SECRET=your-secret-key
3. Start Services
Use Docker Compose for one-click startup
# Start services (including MySQL and Redis)
docker compose up -d
# Access admin interface
# http://localhost:3001
System Requirements
Docker with Docker Compose (the bundled stack provides MySQL 8.2+ and Redis); building from source requires Go 1.23+.
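After the services start, a quick sanity check confirms the stack is up. The `/health` endpoint path is an assumption; adjust it if your version exposes a different probe.

```shell
# Confirm all containers (app, MySQL, Redis) are running
docker compose ps

# Probe the service (health endpoint path is an assumption)
curl http://localhost:3001/health
```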
Start Using GPT-Load Now
Deploy in minutes and enjoy high-performance AI API proxy services