
One API

What is the project about?

One API is an open-source project that allows users to access various large language models (LLMs) through a standardized OpenAI API format.

What problem does it solve?

It simplifies the integration of multiple LLMs from different providers by providing a unified interface. It eliminates the need to adapt to different APIs for each model, making it easier to switch between models or use multiple models simultaneously. It also provides management features that are not always available from the original providers.
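Because One API exposes the standard OpenAI request format, switching providers amounts to changing the model name in an otherwise identical request. A minimal Go sketch of building such a request is below; the base URL, port, and token are placeholders for illustration (port 3000 is the project's usual default):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newChatRequest builds an OpenAI-format chat completion request aimed at a
// One API deployment. baseURL and token are deployment-specific placeholders.
func newChatRequest(baseURL, token, model, prompt string) (*http.Request, error) {
	payload := map[string]any{
		// Switching between providers is just a matter of changing this name;
		// One API routes the request to the matching channel.
		"model": model,
		"messages": []map[string]string{
			{"role": "user", "content": prompt},
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	// One API tokens are sent exactly like OpenAI API keys.
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newChatRequest("http://localhost:3000", "sk-xxxx", "gpt-4o", "Hello!")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```

Any existing OpenAI client library can be pointed at a One API deployment the same way, by overriding its base URL.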

What are the features of the project?

  • Support for Multiple LLMs: OpenAI (including Azure OpenAI), Anthropic Claude (including AWS Bedrock), Google PaLM2/Gemini, Mistral, and many others (Doubao, Baidu ERNIE, Alibaba Qwen, iFlytek Spark, Zhipu ChatGLM, 360 Zhinao, Tencent Hunyuan, Moonshot, Baichuan, MiniMax, Groq, Ollama, LingyiWanwu, StepFun, Coze, Cohere, DeepSeek, Cloudflare Workers AI, DeepL, together.ai, novita.ai, SiliconCloud, xAI).
  • Proxy Configuration: Supports configuring mirrors and various third-party proxy services.
  • Load Balancing: Distributes requests across multiple channels.
  • Stream Mode: Supports streaming responses for a typewriter effect.
  • Multi-node Deployment: Can be deployed across multiple servers.
  • Token Management: Allows setting expiration, quotas, IP ranges, and model access for tokens.
  • Redemption Code Management: Supports generating and exporting redemption codes for account top-ups.
  • Channel Management: Batch creation of channels.
  • User and Channel Grouping: Allows setting different rates for different groups.
  • Per-Channel Model Lists: Restrict each channel to a specific list of models.
  • Quota Details: View detailed quota usage.
  • User Invitation Rewards: Rewards for inviting new users.
  • USD Display: Option to display quotas in USD.
  • Announcements and Customization: Supports announcements, top-up links, initial quotas for new users, custom system name, logo, footer, and custom home/about pages (HTML, Markdown, or iframe).
  • Model Mapping: Redirect a requested model name to a different backend model.
  • Automatic Retry: Automatically retries failed requests.
  • Drawing Interface: Supports image generation APIs.
  • Cloudflare AI Gateway Support: Integrates with Cloudflare AI Gateway.
  • Management API: Provides a management API for extending functionality without modifying the core code.
  • User Authentication: Supports multiple login/registration methods (email, Feishu, GitHub, WeChat).
  • Theme Switching: Allows switching between different themes.
  • Message Pusher Integration: Sends alerts to various apps via Message Pusher.
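The stream mode listed above relays OpenAI-style server-sent events, where each chunk arrives on a `data:` line and the stream ends with a `data: [DONE]` sentinel. A small, hedged sketch of consuming that framing on the client side (the line format follows the OpenAI streaming convention, which One API mirrors):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSSEData extracts the JSON payload from one line of an OpenAI-style
// server-sent-events stream. It reports false for non-data lines and for
// the terminal "data: [DONE]" sentinel.
func parseSSEData(line string) (string, bool) {
	const prefix = "data: "
	if !strings.HasPrefix(line, prefix) {
		return "", false
	}
	payload := strings.TrimPrefix(line, prefix)
	if payload == "[DONE]" {
		return "", false
	}
	return payload, true
}

func main() {
	// Simulated stream lines; in practice these come from the HTTP response body.
	for _, line := range []string{
		`data: {"choices":[{"delta":{"content":"Hel"}}]}`,
		`data: {"choices":[{"delta":{"content":"lo"}}]}`,
		`data: [DONE]`,
	} {
		if chunk, ok := parseSSEData(line); ok {
			fmt.Println(chunk)
		}
	}
}
```

In a real client the lines would be read incrementally from the response body (e.g. with `bufio.Scanner`), which is what produces the typewriter effect.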

What are the technologies used in the project?

  • Go: The backend is primarily written in Go.
  • Docker: Used for containerization and deployment.
  • SQLite/MySQL/PostgreSQL: Supported database backends (SQLite is the default).
  • Redis (Optional): Used for caching.
  • JavaScript (React): Used for the web interface.
  • Nginx (Recommended): Used as a reverse proxy.
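A typical single-node deployment combines these pieces with Docker. The sketch below is illustrative: the image name, default port (3000), environment variables (`SQL_DSN` for MySQL, `REDIS_CONN_STRING` for the optional cache), and host paths should be checked against the project's current README before use.

```shell
# Run One API in a container, persisting its data directory on the host.
# Credentials and hostnames here are placeholders.
docker run -d --name one-api \
  -p 3000:3000 \
  -e SQL_DSN="root:password@tcp(db-host:3306)/one-api" \
  -e REDIS_CONN_STRING="redis://redis-host:6379" \
  -v /home/ubuntu/data/one-api:/data \
  justsong/one-api
```

Omitting `SQL_DSN` falls back to the embedded SQLite database stored under the mounted `/data` directory; for multi-node deployment, all nodes must point at the same external database.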

What are the benefits of the project?

  • Simplified Integration: Easy access to multiple LLMs through a single API.
  • Flexibility: Switch between models or use multiple models easily.
  • Cost Management: Control spending with quotas and rates.
  • Scalability: Supports multi-node deployment for high availability.
  • Customization: Tailor the system to specific needs.
  • Centralized Management: Manage users, tokens, channels, and quotas in one place.

What are the use cases of the project?

  • Building AI-powered applications: Developers can use One API to integrate LLMs into their applications.
  • Creating chatbots: Build chatbots that leverage different LLMs.
  • Managing LLM access: Organizations can use One API to control and monitor LLM usage.
  • Research and experimentation: Researchers can easily compare and test different LLMs.
  • Providing LLM services: Users can create their own LLM service using One API.
  • Proxying and managing access to paid LLM services.
[Screenshot: One API web interface]