Developer Guide Contents
Jump to any section of this guide.
API Overview
LINUS-AI exposes an OpenAI-compatible REST API. Any application built against the OpenAI API can point to LINUS-AI with a single base URL change.
Base URL
http://localhost:8080 — configurable via server.host and server.port in config.toml.
OpenAI-Compatible
Drop-in replacement for the OpenAI API. Change only the base URL and API key — no other code changes required.
Authentication
When auth is enabled, include the header: Authorization: Bearer YOUR_API_KEY. Configure keys in config.toml.
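As a sketch of the header in practice, the helper below builds the Bearer-token headers and checks them against the Models API. The base URL and key values are placeholders; set real values from your `config.toml`.

```python
import requests

BASE_URL = "http://localhost:8080"   # server.host / server.port in config.toml
API_KEY = "YOUR_API_KEY"             # placeholder; configure real keys in config.toml

def auth_headers(api_key):
    """Build the Bearer-token headers LINUS-AI expects when auth is enabled."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def list_models():
    """Quick auth check against the Models API (GET /v1/models)."""
    resp = requests.get(f"{BASE_URL}/v1/models", headers=auth_headers(API_KEY))
    resp.raise_for_status()
    return resp.json()
```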
Streaming
Server-Sent Events (text/event-stream) for all streaming endpoints. Set "stream": true in your request.
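A minimal SSE consumer looks like the sketch below: it parses `data:` lines until the conventional `[DONE]` sentinel and yields each JSON chunk. The chat-completions path follows from the OpenAI compatibility described above.

```python
import json
import requests

def iter_sse_data(lines):
    """Yield the JSON payload of each SSE 'data:' line, stopping at [DONE]."""
    for raw in lines:
        line = raw.decode() if isinstance(raw, bytes) else raw
        if line.startswith("data:"):
            data = line[len("data:"):].strip()
            if data == "[DONE]":
                return
            yield json.loads(data)

def stream_chat(payload, base_url="http://localhost:8080"):
    """POST a chat completion with "stream": true and yield parsed chunks."""
    payload = {**payload, "stream": True}
    resp = requests.post(f"{base_url}/v1/chat/completions", json=payload, stream=True)
    resp.raise_for_status()
    yield from iter_sse_data(resp.iter_lines())
```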
CORS
Configurable in config.toml via server.cors. Default: ["*"] for local development.
Content-Type
All requests must use Content-Type: application/json. Streaming responses use text/event-stream.
All Endpoints at a Glance
Compliance & Security API
RAG Document Access Control API
Chat Completions
The primary inference endpoint. OpenAI-compatible with LINUS-AI extensions for profile selection and custom system prompts.
Request Body Parameters
Non-Streaming Response
Streaming Response (SSE)
Code Examples
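As a starting point, the sketch below assembles a standard OpenAI-style request and reads back the first choice. `MODEL_NAME` is a placeholder; the LINUS-AI extension parameters for profile selection are not shown here, since their names are listed in the request-body reference rather than invented in this example.

```python
import requests

def build_chat_request(model, user_message, system_prompt=None):
    """Assemble an OpenAI-style chat completion payload."""
    messages = []
    if system_prompt is not None:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages}

def chat(payload, base_url="http://localhost:8080"):
    """POST to /v1/chat/completions and return the first choice's text."""
    resp = requests.post(f"{base_url}/v1/chat/completions", json=payload)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Usage (not run here):
#   chat(build_chat_request("MODEL_NAME", "Hello!"))
```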
Embeddings
Generate vector embeddings for semantic search, RAG pipelines, clustering, and similarity tasks — all computed locally.
Request Parameters
Pair embeddings with /v1/vault/store to build a fully private semantic memory store.
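A minimal similarity-search sketch, assuming the response follows the OpenAI embeddings format (an assumption justified by the compatibility claim above); `EMBEDDING_MODEL` is a placeholder name.

```python
import math
import requests

def embed(texts, model="EMBEDDING_MODEL", base_url="http://localhost:8080"):
    """Request embeddings for a list of strings via /v1/embeddings."""
    resp = requests.post(f"{base_url}/v1/embeddings",
                         json={"model": model, "input": texts})
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```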
Models API
List, download, inspect, and remove local models programmatically.
Agent / Agentic Stream
LINUS-AI's native agentic inference endpoints. Supports multi-turn reasoning, tool use, 14 vertical profiles, and an encrypted semantic vault.
POST /agent/stream — Request Parameters
SSE Event Types
GET /agent/profiles — Response
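The agentic stream can be consumed much like any SSE endpoint. In the sketch below, the request field names (`"prompt"`, `"profile"`) and the event schema are illustrative assumptions, not confirmed parameter names; check the request-parameter and SSE-event-type references above before relying on them.

```python
import json
import requests

def parse_sse_line(line):
    """Return the JSON payload of one SSE 'data:' line, or None for other lines."""
    if line and line.startswith("data:"):
        return json.loads(line[len("data:"):].strip())
    return None

def stream_agent(prompt, profile=None, base_url="http://localhost:8080"):
    """Consume events from POST /agent/stream (field names are assumptions)."""
    payload = {"prompt": prompt}
    if profile is not None:
        payload["profile"] = profile  # one of the 14 vertical profiles
    resp = requests.post(f"{base_url}/agent/stream", json=payload, stream=True)
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        event = parse_sse_line(line)
        if event is not None:
            yield event
```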
WebSocket Streaming
For real-time bidirectional communication — ideal for chat UIs and applications that need low-latency token streaming without managing SSE connections.
Message Protocol
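A client sketch using the third-party `websockets` package. The endpoint path and the message shapes (`{"type": "message", ...}` out, `{"type": "token"/"done", ...}` in) are illustrative assumptions, not the documented protocol; substitute the shapes from the Message Protocol reference.

```python
import json

WS_URL = "ws://localhost:8080/ws"  # endpoint path is an assumption

def encode_user_message(prompt):
    """Frame a prompt as a JSON message (shape is illustrative, not documented)."""
    return json.dumps({"type": "message", "content": prompt})

async def chat_over_ws(prompt):
    """Send one message and print streamed tokens (requires: pip install websockets)."""
    import websockets
    async with websockets.connect(WS_URL) as ws:
        await ws.send(encode_user_message(prompt))
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "token":
                print(event.get("content", ""), end="", flush=True)
            elif event.get("type") == "done":
                break

# Run with: asyncio.run(chat_over_ws("Hello"))
```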
Python SDK
LINUS-AI uses the standard openai Python library for all OpenAI-compatible endpoints, plus direct HTTP for LINUS-AI extensions.
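Pointing the official `openai` client at LINUS-AI amounts to changing the base URL and key, per the drop-in-replacement claim above. `MODEL_NAME` is a placeholder; the `/v1` prefix follows the OpenAI path convention.

```python
BASE_URL = "http://localhost:8080/v1"  # LINUS-AI base URL plus the /v1 API prefix
API_KEY = "YOUR_API_KEY"               # placeholder; set a real key if auth is enabled

def make_client():
    """Build an OpenAI SDK client pointed at LINUS-AI (requires: pip install openai)."""
    from openai import OpenAI
    return OpenAI(base_url=BASE_URL, api_key=API_KEY)

# Usage (not run here):
#   client = make_client()
#   reply = client.chat.completions.create(
#       model="MODEL_NAME",  # placeholder; list real names via client.models.list()
#       messages=[{"role": "user", "content": "Hello"}],
#   )
#   print(reply.choices[0].message.content)
```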
Node.js / JavaScript SDK
Use the official openai npm package for OpenAI-compatible endpoints, and fetch for LINUS-AI extensions.
Plugin System
Extend the LINUS-AI agent with custom tools by dropping Python modules into the plugin directory. No server restart required — plugins are hot-loaded.
Plugin Directory
Place plugin packages in ~/.linus_ai/plugins/. Each plugin is a directory containing plugin.json and a Python module.
Entry Point
Every plugin must expose a register(app) function in its main module. Called at load time with the LINUS-AI app context.
Tool Registration
Use the @tool decorator to register functions as agent tools. The agent discovers and invokes them during agentic inference.
Plugin Manifest
Each plugin declares its identity and tools in plugin.json: name, version, description, author, and tools array.
Plugin Manifest — plugin.json
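An illustrative manifest using the fields named above (name, version, description, author, tools). The values are examples, and the shape of the tools array entries (plain names here) is an assumption; confirm the exact schema in the manifest reference.

```json
{
  "name": "weather",
  "version": "0.1.0",
  "description": "Look up current weather for a city",
  "author": "Jane Developer",
  "tools": ["get_weather"]
}
```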
Complete Plugin Example — Weather Tool
from linus_ai.plugin import tool, register is available in any plugin. Tools are automatically discovered by the agent and included in its reasoning context.
No server restart is required — LINUS-AI hot-loads plugins from the plugins directory on startup and when a SIGHUP is received.
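A minimal plugin module sketch, consistent with the entry-point and @tool rules above. Only `tool` is imported here, to avoid shadowing the required `register(app)` entry point; the try/except fallback is a convenience so the module also runs outside LINUS-AI, not part of the plugin contract. The stub weather lookup is illustrative.

```python
# ~/.linus_ai/plugins/weather/main.py
try:
    from linus_ai.plugin import tool  # provided when loaded inside LINUS-AI
except ImportError:                   # fallback so the module also runs standalone
    def tool(fn):
        return fn

@tool
def get_weather(city):
    """Return a canned weather report; a real plugin would call a weather API here."""
    return f"Weather in {city}: sunny, 21 °C"

def register(app):
    """Required entry point, called once at load time with the app context."""
    # @tool-decorated functions are discovered automatically during agentic
    # inference; use register() for any extra setup the plugin needs.
    pass
```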
Webhook Events
Configure LINUS-AI to push structured event notifications to any HTTP endpoint. Useful for monitoring, audit pipelines, and workflow automation.
Configuration
Event Types
Payload Format
Signature Verification
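Webhook receivers typically recompute a keyed hash over the raw body and compare it in constant time. The sketch below assumes HMAC-SHA256 with a hex digest, which is a common scheme but an assumption here; the actual algorithm and signature header name are specified in the Signature Verification reference.

```python
import hashlib
import hmac

def verify_signature(secret, body, signature):
    """Check a raw webhook body against its signature (HMAC-SHA256 assumed).

    Uses hmac.compare_digest to avoid timing side-channels.
    """
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```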
Rate Limits & Error Codes
HTTP status codes, error response format, rate-limit headers, and a reference table of all common error codes.
HTTP Status Codes
Error Response Format
Rate-Limit Headers
Common Error Codes
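A hedged client-side retry sketch for HTTP 429 responses. It honors the standard Retry-After header when present and otherwise backs off exponentially; LINUS-AI's own rate-limit header names are listed in the Rate-Limit Headers reference and are not assumed here.

```python
import time
import requests

def backoff_seconds(headers, attempt):
    """Honor a standard Retry-After header, else back off 1s, 2s, 4s, ..."""
    return float(headers.get("Retry-After", 2 ** attempt))

def post_with_retry(url, payload, max_retries=3):
    """POST, retrying on HTTP 429 with backoff; other statuses return immediately."""
    for attempt in range(max_retries + 1):
        resp = requests.post(url, json=payload)
        if resp.status_code != 429 or attempt == max_retries:
            return resp
        time.sleep(backoff_seconds(resp.headers, attempt))
```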
Local Development
Set up a development environment to contribute to LINUS-AI, run tests, or build custom inference backends.
Running Tests
Python: pytest tests/ -v
Rust: cargo test
Integration: pytest tests/integration/ --live (requires a running server).
Linting & Formatting
Python: ruff check . and ruff format .
Rust: cargo clippy and cargo fmt
Contributing
Read CONTRIBUTING.md before submitting a PR. We welcome bug fixes, new model support, and documentation improvements.
Branch Conventions
Features: feat/short-description
Bug fixes: fix/short-description
Docs: docs/short-description