Embedded LLM Removal¶
Embedded LLM support has been removed from k13d.
Why It Was Removed¶
- Response quality was well below that of the other supported providers
- It added maintenance cost across the CLI, Web UI, docs, and packaging
- Ollama provides a better local/self-hosted path with clearer operational boundaries
What To Use Instead¶
Local / Private Inference¶
Use Ollama for local or self-hosted inference; it covers the private-deployment use case the embedded LLM previously served (see the configuration sketch in the Migration section below).
Remote Providers¶
Use any supported remote provider such as OpenAI, Anthropic, Gemini, Solar, Azure OpenAI, or Bedrock.
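A minimal sketch, assuming provider selection lives under an `ai` block with `provider` and `model` keys and the API key is supplied through an environment variable; the actual field names in your k13d version may differ:

```yaml
# Illustrative remote provider configuration; adjust keys to the real schema.
ai:
  provider: openai
  model: gpt-4o
  apiKeyEnv: OPENAI_API_KEY   # hypothetical key; check how k13d actually reads credentials
```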
Migration¶
If an old config still contains the embedded provider selection (key names below are illustrative and may differ from the actual k13d schema):
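```yaml
# Removed embedded selection (illustrative keys)
ai:
  provider: embedded
```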
change it to an active provider, such as a local Ollama server or any of the remote providers listed above:
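```yaml
# Illustrative replacement: a local Ollama server
ai:
  provider: ollama
  model: llama3                     # any model pulled locally with Ollama
  baseURL: http://localhost:11434   # Ollama's default local endpoint
```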