# Provider Configuration
aCMF supports separate OpenAI-compatible providers for:

- Adjudicator
- Context Enhancer
- Cortex
- Embeddings
## Provider kinds
- `openai_compatible`
- `stub`

`stub` is useful for tests and local development. `openai_compatible` expects a standard OpenAI-style API with a base URL, an API key, and a model name.
For a complete field-by-field environment reference, including which variables are required and when, see Environment Reference.
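As an illustration, a minimal dotenv-style configuration might look like the sketch below. The variable names here (the role prefixes and the global `OPENAI_COMPATIBLE_` defaults) are assumptions for illustration only; the Environment Reference has the authoritative names.

```shell
# Hypothetical variable names for illustration; see the Environment
# Reference for the actual, authoritative list.

# Global OpenAI-compatible defaults (fallback for roles that omit their own)
OPENAI_COMPATIBLE_BASE_URL=https://api.example.com/v1
OPENAI_COMPATIBLE_API_KEY=example-key

# Adjudicator: full OpenAI-compatible provider, inherits the globals above
ADJUDICATOR_PROVIDER=openai_compatible
ADJUDICATOR_MODEL=openai/gpt-5.4

# Embeddings: stub provider for local tests — no credentials needed
EMBEDDINGS_PROVIDER=stub
```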
## Recommended models
| Role | Recommended models |
|---|---|
| Adjudicator | openai/gpt-5.4 or google/gemini-3.1-flash-lite-preview |
| Context Enhancer | openai/gpt-5.4, google/gemini-3.1-flash-lite-preview, or inception/mercury-2 for fast token speed |
| Cortex | openai/gpt-5.4 or google/gemini-3.1-flash-lite-preview |
| Embedding | openai/text-embedding-3-large or openai/text-embedding-3-small |
- Use `openai/gpt-5.4` for the Adjudicator, Context Enhancer, and Cortex if you want one consistent default.
- Use `inception/mercury-2` only for the Context Enhancer if you want faster token speed on read-time synthesis.
- Use `openai/text-embedding-3-large` when retrieval quality matters most, or `openai/text-embedding-3-small` when cost and vector size matter more.
## Resolution rules
- Each role first reads its own `PROVIDER`, `BASE_URL`, `API_KEY`, `MODEL`, and `TIMEOUT_SECONDS`.
- If a role does not define `BASE_URL`, `API_KEY`, or `TIMEOUT_SECONDS`, aCMF falls back to the global OpenAI-compatible defaults.
- If a role is set to `stub`, network credentials are not required for that role.
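The fallback behaviour above can be sketched in Python. This is not aCMF's actual implementation; the `ROLE_`-prefixed variable names and the `OPENAI_COMPATIBLE_` global prefix are assumptions for illustration.

```python
# Sketch of the per-role resolution rules; variable-name conventions are
# hypothetical — consult the Environment Reference for the real names.
GLOBAL_PREFIX = "OPENAI_COMPATIBLE"          # assumed global-defaults prefix
FALLBACK_FIELDS = ("BASE_URL", "API_KEY", "TIMEOUT_SECONDS")

def resolve_role(role: str, env: dict) -> dict:
    """Resolve provider settings for one role (e.g. "ADJUDICATOR").

    Role-specific variables win; BASE_URL, API_KEY, and TIMEOUT_SECONDS
    fall back to the global OpenAI-compatible defaults; a stub role
    needs no network credentials at all.
    """
    def get(key):
        return env.get(f"{role}_{key}")

    provider = get("PROVIDER") or "openai_compatible"
    if provider == "stub":
        return {"PROVIDER": "stub"}  # credentials not required for this role

    resolved = {"PROVIDER": provider, "MODEL": get("MODEL")}
    for field in FALLBACK_FIELDS:
        # Role value first, then the global OpenAI-compatible default.
        resolved[field] = get(field) or env.get(f"{GLOBAL_PREFIX}_{field}")
    return resolved

env = {
    "OPENAI_COMPATIBLE_BASE_URL": "https://api.example.com/v1",
    "OPENAI_COMPATIBLE_API_KEY": "example-key",
    "ADJUDICATOR_MODEL": "openai/gpt-5.4",
    "ADJUDICATOR_TIMEOUT_SECONDS": "30",
    "EMBEDDINGS_PROVIDER": "stub",
}

adjudicator = resolve_role("ADJUDICATOR", env)
# BASE_URL and API_KEY fall back to the globals; TIMEOUT_SECONDS is role-local.
embeddings = resolve_role("EMBEDDINGS", env)
# A stub role resolves to {"PROVIDER": "stub"} with no credentials.
```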