Use 123API as the OpenAI-compatible provider in AnythingLLM for chat, knowledge base and retrieval workflows.
After launching AnythingLLM:
Choose Generic OpenAI or the OpenAI-compatible provider option and fill in:
| Field | Value |
|---|---|
| Provider | OpenAI Compatible or OpenAI |
| Base URL | https://123api.co/v1 |
| API Key | Your 123API key |
| Chat Model | Example: gpt-4o, gpt-5, claude-sonnet |
In AnythingLLM, the Base URL must include `/v1`. If you omit it, requests typically fail or return a provider error.
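The settings in the table map directly onto an OpenAI-compatible chat completions request. The sketch below builds that request with only the standard library to show why the `/v1` suffix matters; the API key value is a placeholder, and the endpoint path follows the OpenAI-compatible convention of appending `/chat/completions` to the base URL.

```python
import json

BASE_URL = "https://123api.co/v1"
API_KEY = "sk-your-123api-key"  # placeholder: your real 123API key

def chat_request(model: str, prompt: str):
    """Build the URL, headers, and JSON body for a chat completion call."""
    # Without /v1 in BASE_URL, this would resolve to the wrong path
    # and the provider would reject the request.
    url = BASE_URL.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = chat_request("gpt-4o", "Hello")
print(url)  # https://123api.co/v1/chat/completions
```

Sending the built request (for example with `urllib.request`) requires a valid key; the construction alone is enough to verify the endpoint AnythingLLM will hit.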
If you plan to use document retrieval or a private knowledge base, also configure an embedding model:
| Field | Value |
|---|---|
| Embedding Provider | Generic OpenAI |
| Base URL | https://123api.co/v1 |
| API Key | Your 123API key |
| Model | text-embedding-3-small or another supported embedding model |
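Document ingestion sends each chunk to the embeddings endpoint using the same base URL and key. A minimal sketch under the same assumptions as above (placeholder key, OpenAI-compatible `/embeddings` path under the `/v1` base URL):

```python
import json

BASE_URL = "https://123api.co/v1"
API_KEY = "sk-your-123api-key"  # placeholder: your real 123API key

def embedding_request(model: str, texts: list):
    """Build the URL, headers, and JSON body for an embeddings call."""
    url = BASE_URL.rstrip("/") + "/embeddings"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    # "input" accepts a list, so multiple chunks can be embedded per call.
    body = json.dumps({"model": model, "input": texts})
    return url, headers, body

url, _, body = embedding_request("text-embedding-3-small",
                                 ["first chunk", "second chunk"])
```

Batching chunks into one `input` list is usually cheaper and faster than one request per chunk.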
Suggested chat models:

- `gpt-4o-mini` for low-cost testing and simple assistant flows
- `gpt-4o` for higher-quality daily conversations
- `gpt-5` for harder reasoning or task planning
- `claude-sonnet` for long context and analysis-heavy workflows

If requests fail, check whether the Base URL includes `/v1`, whether the API key is valid, and whether your deployment environment can reach https://123api.co.
If retrieval quality is poor, recheck your embedding model, reduce document noise, improve chunk quality, and test with smaller, structured documents before importing a large corpus.
If a model name is rejected, query `GET /v1/models` and use the exact model ID returned for your current account and environment.
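The listing call can be done with only the standard library. This sketch assumes the endpoint returns the OpenAI-compatible list shape (`{"object": "list", "data": [{"id": ...}, ...]}`) and uses a placeholder key; `fetch_models` needs network access and a valid key, while the parsing helper works on any response body.

```python
import json
import urllib.request

BASE_URL = "https://123api.co/v1"
API_KEY = "sk-your-123api-key"  # placeholder: your real 123API key

def model_ids(models_response: dict) -> list:
    """Extract the exact model IDs from a /v1/models response body."""
    return [m["id"] for m in models_response.get("data", [])]

def fetch_models() -> list:
    """Call GET /v1/models; requires a valid key and network access."""
    req = urllib.request.Request(
        BASE_URL.rstrip("/") + "/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return model_ids(json.load(resp))
```

Copy an ID from the returned list verbatim into the Chat Model field; aliases that are not in the list will be rejected by the provider.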