Using 123API with Gemini CLI

Some command-line AI tools accept custom OpenAI-compatible endpoints. In those cases, point the tool at the 123API base URL and authenticate with your 123API key.

Prerequisites

  1. Install Node.js and npm
  2. Install the Gemini CLI tool if your workflow depends on it
  3. Create a 123API API key

Typical Configuration

Field      Value
Base URL   https://123api.co/v1
API Key    Your 123API key
Model      Example: gemini-2.0-flash, gpt-4o, claude-sonnet

Environment Variables

For shell-based workflows, export your values before use:

export GEMINI_API_KEY="your-123api-key"
export GEMINI_BASE_URL="https://123api.co/v1"

  1. Export the key into your local shell environment
  2. Configure the custom endpoint in the CLI tool
  3. Run a small test prompt first
  4. Save the exact working model ID for reuse in scripts
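The steps above can be sketched in code. The helper below reads the two environment variables and fails fast if the key is missing; the function name and the fallback default are illustrative, not part of 123API or Gemini CLI:

```python
import os

def load_123api_config():
    """Read 123API settings from the environment and fail fast if the key is missing."""
    api_key = os.environ.get("GEMINI_API_KEY")
    # Assumed default; override with GEMINI_BASE_URL as shown above.
    base_url = os.environ.get("GEMINI_BASE_URL", "https://123api.co/v1")
    if not api_key:
        raise RuntimeError("GEMINI_API_KEY is not set; export it first")
    # Strip a trailing slash so later path joins stay predictable.
    return {"api_key": api_key, "base_url": base_url.rstrip("/")}

if __name__ == "__main__":
    os.environ.setdefault("GEMINI_API_KEY", "your-123api-key")  # demo value only
    print(load_123api_config()["base_url"])  # → https://123api.co/v1
```

Running this check before the CLI call catches a missing export early instead of surfacing as an authentication error later.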

Example

gemini chat --model gemini-2.0-flash
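If your tool of choice cannot be pointed at a custom endpoint, the same request can be issued directly over HTTP. The sketch below builds an OpenAI-style `POST /v1/chat/completions` request from the environment variables above; the payload shape follows the generic OpenAI-compatible convention and the helper name is illustrative:

```python
import json
import os
import urllib.request

def build_chat_request(prompt, model="gemini-2.0-flash"):
    """Build an OpenAI-compatible chat completion request against 123API."""
    base_url = os.environ.get("GEMINI_BASE_URL", "https://123api.co/v1").rstrip("/")
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('GEMINI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

# urllib.request.urlopen(build_chat_request("Hello")) would send the request.
```

Sending is left commented out so the snippet stays runnable without a live key.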

FAQ

Gemini CLI still uses the official endpoint

Open a new shell session and confirm that GEMINI_BASE_URL is actually exported (for example with echo $GEMINI_BASE_URL) before running the command.

A Gemini model name does not work

Use the exact model ID returned by GET /v1/models in your current environment.
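A small sketch of that lookup, assuming the response follows the standard OpenAI-style `{"data": [{"id": ...}]}` listing shape (the function names are illustrative):

```python
import json
import os
import urllib.request

def extract_model_ids(body):
    """Pull model IDs out of an OpenAI-style /v1/models listing."""
    return [m["id"] for m in body.get("data", [])]

def list_model_ids():
    """Fetch GET /v1/models from 123API and return the exact model IDs."""
    base_url = os.environ.get("GEMINI_BASE_URL", "https://123api.co/v1").rstrip("/")
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {os.environ['GEMINI_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))
```

Copy the returned ID verbatim into your scripts rather than retyping it.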

Text API docs

Review the available text API formats before wiring them into CLI tools.