Introduction

What is Context Engine?

Your codebase, instantly understood by any AI

Context Engine gives your AI a complete, always-current understanding of your codebase — across every file, symbol, commit, and repository — so you can stop re-explaining things and start shipping faster.

The Problem

Every time you open a new chat with an AI assistant, you start from zero. You paste files, re-explain architecture, describe relationships between modules, and hope the model understands enough context to give a useful answer. On large codebases, this breaks down fast.

Context Engine solves this by indexing your entire codebase into a hybrid semantic + lexical search layer with a symbol graph and persistent memory. Your AI tools query this index directly via the Model Context Protocol (MCP) and get precise, grounded answers without you having to feed them context by hand.

How It Works

Your Codebase
(files, git history, dependencies, docs)
        ↓
  Context Engine Index
  ┌──────────────────────────────────┐
  │  Qdrant Vector Store             │
  │  Symbol Graph (call/import graph)│
  │  Memory Store (persistent notes) │
  └──────────────────────────────────┘
        ↓
  MCP Tools (exposed to your AI client)
  repo_search · symbol_graph · context_answer
  memory_store · memory_find · cross_repo_search
        ↓
  Claude · Cursor · Windsurf · Copilot · Kiro
  (any MCP-compatible client)

Your code is indexed once — either automatically via the VS Code extension or manually via the CLI — and then every AI query is grounded in live, up-to-date context.
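
To make that flow concrete, here is a minimal sketch of an MCP client connecting to a locally running Context Engine server over stdio, using the official MCP Python SDK. The launch command ("context-engine" with an "mcp" argument) is an assumption for illustration; use the command from your actual installation.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Hypothetical launch command for a local Context Engine MCP server.
    server = StdioServerParameters(command="context-engine", args=["mcp"])

    async def main() -> None:
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # List the tools the server exposes (repo_search, symbol_graph, ...).
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])

    asyncio.run(main())

In practice your AI client manages this connection for you; the capability sketches below assume a session established this way.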

Core Capabilities

Semantic Code Search

repo_search finds code by meaning, not just keywords. Ask for "authentication token refresh logic" and get the right file and function, even if the code never uses those exact words.
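
As a rough sketch, a repo_search call from an MCP session (established as in the connection example above) might look like this; the argument names "query" and "limit" are assumptions, not the documented schema.

    from mcp import ClientSession

    async def find_token_refresh(session: ClientSession):
        # Search by meaning; argument names are illustrative assumptions.
        result = await session.call_tool(
            "repo_search",
            arguments={"query": "authentication token refresh logic", "limit": 5},
        )
        # The content is whatever the server returns, typically ranked matches.
        for item in result.content:
            print(item)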

Symbol Graph Navigation

symbol_graph lets your AI navigate callers, callees, definitions, importers, and inheritance chains. Ask who calls a function, what a class inherits from, or what would break if you change a method.
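
A caller lookup might be issued along these lines; the "symbol" and "relation" argument names are illustrative assumptions.

    from mcp import ClientSession

    async def who_calls(session: ClientSession, symbol: str):
        # Ask the symbol graph for everything that calls the given function.
        result = await session.call_tool(
            "symbol_graph",
            arguments={"symbol": symbol, "relation": "callers"},
        )
        return result.content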

Natural Language Q&A

context_answer retrieves the relevant code spans and generates a cited explanation. Your AI answers questions like "how does the rate limiter decide when to reject requests?" with references to the actual implementation.
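
A sketch of asking that question through the tool, assuming a "question" argument (the real schema may differ):

    from mcp import ClientSession

    async def ask(session: ClientSession, question: str):
        # Retrieve relevant code spans plus a cited explanation.
        result = await session.call_tool(
            "context_answer",
            arguments={"question": question},
        )
        return result.content

    # e.g. ask(session, "How does the rate limiter decide when to reject requests?")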

Cross-Repo Search

cross_repo_search traces data flow across repository boundaries — from a frontend API call to the backend handler that processes it — without switching contexts.
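
A hypothetical trace from a frontend call site to its backend handler could be requested like this; the repository names and argument names are made up for illustration.

    from mcp import ClientSession

    async def trace_checkout_flow(session: ClientSession):
        # Follow a frontend API call into the backend code that serves it.
        result = await session.call_tool(
            "cross_repo_search",
            arguments={
                "query": "POST /api/checkout handler",
                "repos": ["webapp-frontend", "payments-backend"],
            },
        )
        return result.content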

Persistent Memory

memory_store and memory_find let your AI remember architectural decisions, known gotchas, and team conventions across sessions. Knowledge accumulates rather than being lost at the end of each conversation.
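
A sketch of storing a convention and finding it again later; the "text" and "query" argument names are assumptions.

    from mcp import ClientSession

    async def remember_and_recall(session: ClientSession):
        # Persist a team convention so later sessions can retrieve it.
        await session.call_tool(
            "memory_store",
            arguments={"text": "All public API handlers must be idempotent."},
        )
        # Later (or in another conversation), search the stored notes.
        found = await session.call_tool(
            "memory_find",
            arguments={"query": "API handler conventions"},
        )
        return found.content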

By the Numbers

Search latency        < 100 ms
Language support      32 languages
IDE integrations      16
Compute efficiency    80% less compute per query vs. naive RAG

Supported AI Clients

Context Engine works with any tool that supports the Model Context Protocol (MCP):

  • Claude (claude.ai, Claude Code CLI)
  • Cursor
  • Windsurf
  • GitHub Copilot
  • Kiro
  • Any other MCP-compatible client
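
Most of these clients are pointed at an MCP server through a small JSON configuration entry (for example Claude Desktop's claude_desktop_config.json or Cursor's .cursor/mcp.json). The sketch below builds such an entry in Python; the server name and launch command are assumptions, so substitute the values from your installation.

    import json

    # Hypothetical registration entry for a local Context Engine MCP server.
    # "mcpServers" is the key convention used by clients such as Claude Desktop
    # and Cursor; the command and args below are illustrative only.
    config = {
        "mcpServers": {
            "context-engine": {
                "command": "context-engine",
                "args": ["mcp"],
            }
        }
    }

    print(json.dumps(config, indent=2))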

Deployment Options

SaaS — The fastest way to get started. Install the VS Code extension and your workspace is indexed automatically. No infrastructure to manage.

Self-Hosted (Singular Mode) — Run the full stack on your own infrastructure. Your code never leaves your network. Ideal for teams with data sovereignty requirements or private codebases.

Enterprise — Dedicated infrastructure with advanced graph query support (Memgraph-backed), higher indexing throughput, and SLA guarantees.

See Plans & Tiers for a full comparison.

Next Steps

Continue to Plans & Tiers to choose a deployment option, then index your first workspace with the VS Code extension or the CLI.