Cherry Studio Integration Guide

This guide helps you integrate the GPT-Load proxy service with the Cherry Studio AI client, covering full configuration for the OpenAI, Gemini, Gemini OpenAI-compatible, and Anthropic channel types.

Prerequisites

Ensure you have deployed and started the GPT-Load service; it runs on http://localhost:3001 by default.
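Before configuring Cherry Studio, you can optionally confirm that the service is reachable from your machine. The following is a minimal sketch using Python's requests library; it only checks that the default address answers HTTP requests and makes no assumptions about what the root path returns. Adjust the address if your deployment does not use the default port.

```python
# Quick reachability check for the GPT-Load service.
import requests

BASE_URL = "http://localhost:3001"  # default GPT-Load address used in this guide

try:
    # Any HTTP response means the port is reachable; we do not assume
    # anything about what the root path actually serves.
    resp = requests.get(BASE_URL, timeout=5)
    print(f"GPT-Load reachable, HTTP {resp.status_code}")
except requests.RequestException:
    print("GPT-Load is not reachable - check that the service is running")
```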

General Setup Steps

Service Setup Location

Access Settings Page

Click "Settings" → "Model Services" in the bottom left corner of Cherry Studio

Add New Service

Click the "Add" button at the bottom of the page (do not use existing services from the list)

Get Models After Configuration

Click the "Manage" button to get the model list and select the desired models

Important Reminder

All channel types follow the same setup process: select the provider type → configure the API address and key → get the model list. Only the provider type and API address differ between channels.

1. OpenAI Channel Configuration

Step 1: Create Service

Creation Steps

Enter Service Name

Set an easily recognizable name for your service

Select Provider Type

OpenAI

Select OpenAI from the provider type dropdown

Cherry Studio OpenAI service creation screenshot

Step 2: Configure Service

Configuration Parameters

Configure API Key

sk-123456

Use the proxy key configured in your GPT-Load

Set API Address

http://localhost:3001/proxy/openai

Where "openai" is the group name configured in your GPT-Load

Get Model List

After configuration, click the "Manage" button to get and select the required models

Cherry Studio OpenAI service configuration screenshot
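If you want to verify the proxy endpoint outside of Cherry Studio first, you can send a request directly. The sketch below assumes that GPT-Load forwards the path after the group name ("openai") to the upstream OpenAI-compatible API, so /proxy/openai/v1/chat/completions reaches the chat completions endpoint; the key and model name are placeholders you should replace with your own values.

```python
# Minimal sketch: call the OpenAI channel through the GPT-Load proxy.
import requests

BASE_URL = "http://localhost:3001/proxy/openai"   # "openai" is the group name
PROXY_KEY = "sk-123456"                           # your GPT-Load proxy key

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {PROXY_KEY}"},
    json={
        "model": "gpt-4o-mini",                   # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```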

2. Gemini Channel Configuration

Step 1: Create Service

Creation Steps

Enter Service Name

Set an easily recognizable name for your Gemini service

Select Provider Type

Gemini

Select Gemini from the provider type dropdown

Cherry Studio Gemini service creation screenshot

Step 2: Configure Service

Configuration Parameters

Configure API Key

sk-123456

Use the proxy key configured in your GPT-Load

Set API Address

http://localhost:3001/proxy/gemini

Where "gemini" is the group name configured in your GPT-Load

Get Model List

After configuration, click the "Manage" button to get and select the required models

Cherry Studio Gemini service configuration screenshot
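To check this channel outside of Cherry Studio, you can call the proxy with the native Gemini REST format. This is a hedged sketch: it assumes the path after the group name ("gemini") is forwarded to the upstream Gemini API and that the proxy key is accepted via the x-goog-api-key header; the model name is a placeholder, so adjust it if your setup differs.

```python
# Minimal sketch: call the Gemini channel through the GPT-Load proxy
# using the native Gemini generateContent format.
import requests

BASE_URL = "http://localhost:3001/proxy/gemini"   # "gemini" is the group name
PROXY_KEY = "sk-123456"                           # your GPT-Load proxy key
MODEL = "gemini-1.5-flash"                        # placeholder model name

resp = requests.post(
    f"{BASE_URL}/v1beta/models/{MODEL}:generateContent",
    headers={"x-goog-api-key": PROXY_KEY},
    json={"contents": [{"parts": [{"text": "Hello"}]}]},
    timeout=30,
)
print(resp.status_code, resp.json())
```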

3. Gemini OpenAI-Compatible Format

Step 1: Create Service

This configuration uses the Gemini channel's OpenAI-compatible interface format

Creation Steps

Enter Service Name

Set a name for your Gemini OpenAI compatible service

Select Provider Type

OpenAI

Use the OpenAI provider type to access the compatible interface

Cherry Studio Gemini OpenAI compatible service creation screenshot

Step 2: Configure Service

Configuration Parameters

Configure API Key

sk-123456

Use the proxy key configured in your GPT-Load

Set API Address

http://localhost:3001/proxy/gemini/v1beta/openai/

Note: The address must end with "/" to prevent Cherry Studio from automatically appending the v1 path

Get Model List

After configuration, click the "Manage" button to get and select the required models

Cherry Studio Gemini OpenAI compatible service configuration screenshot

Critical Configuration Note

The API address must end with "/". This is a Cherry Studio rule that keeps the v1 path from being appended automatically, so the compatible interface works properly.
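The request below illustrates the final URL this route should produce. It assumes that, because the base address ends with "/", Cherry Studio appends only chat/completions rather than /v1/chat/completions (consistent with the rule described above), and that GPT-Load forwards the remaining path to Gemini's OpenAI-compatible endpoint; the key and model name are placeholders.

```python
# Minimal sketch: the Gemini OpenAI-compatible route through the GPT-Load proxy.
import requests

BASE_URL = "http://localhost:3001/proxy/gemini/v1beta/openai/"  # note the trailing "/"
PROXY_KEY = "sk-123456"                                         # your GPT-Load proxy key

resp = requests.post(
    f"{BASE_URL}chat/completions",                # no extra v1 segment is added
    headers={"Authorization": f"Bearer {PROXY_KEY}"},
    json={
        "model": "gemini-1.5-flash",              # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```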

4. Anthropic (Claude) Channel Configuration

Step 1: Create Service

Creation Steps

Enter Service Name

Set an easily recognizable name for your Anthropic service

Select Provider Type

Anthropic

Select Anthropic from the provider type dropdown

Cherry Studio Anthropic service creation screenshot

Step 2: Configure Service

Configuration Parameters

Configure API Key

sk-123456

Use the proxy key configured in your GPT-Load

Set API Address

http://localhost:3001/proxy/anthropic

Where "anthropic" is the group name configured in your GPT-Load

Get Model List

After configuration, click the "Manage" button to get and select the required models

Cherry Studio Anthropic service configuration screenshot
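You can also verify this channel directly with the Anthropic Messages API format. The sketch assumes the path after the group name ("anthropic") is forwarded upstream and that the proxy key is accepted in the x-api-key header; the key and model name are placeholders to replace with your own.

```python
# Minimal sketch: call the Anthropic channel through the GPT-Load proxy
# using the Messages API format.
import requests

BASE_URL = "http://localhost:3001/proxy/anthropic"  # "anthropic" is the group name
PROXY_KEY = "sk-123456"                             # your GPT-Load proxy key

resp = requests.post(
    f"{BASE_URL}/v1/messages",
    headers={
        "x-api-key": PROXY_KEY,
        "anthropic-version": "2023-06-01",
    },
    json={
        "model": "claude-3-5-sonnet-latest",        # placeholder model name
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```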

Important Notes

Configuration Considerations

Please replace the GPT-Load access address in the examples with your actual service address

The group names in the path (such as openai, gemini, anthropic) need to match your actual configuration in GPT-Load

Use the actual proxy key configured in GPT-Load; do not use the example sk-123456

After configuration, remember to select the newly added model in the chat interface for testing