Features
Voidleap gives you the deepest scaffolding on the market, runs locally on your machine, and shows you everything it does. Nineteen sections that go deeper than the hero.
Build Your Own
The deepest customization layer in any AI coding tool — visual when you want, frontmatter when you don't.
Voidleap is a workshop, not a catalog. Every agent, skill, hook, MCP server, slash command, keybinding, security rule, and automation that ships with the app is a starting point you can copy, edit, and make your own. The visual editors are for when that's faster. The raw-frontmatter toggle is for when you want full control.
Custom agents have richer frontmatter than anything else available. Per-agent model and provider. Per-agent effort and thinking budget. Allowed and denied tools, MCP, skills, and subagents. Path access rules with paths.allow and paths.deny. Context isolation mode (none, conversation, fork). Input mode (default, read-only, careful, yolo). Custom system instructions. Provider-specific variants like my-agent.anthropic.md and my-agent.openai.md that resolve automatically so you can tune prompts per model.
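As a sketch, an agent file using the fields above might look like the following. The exact key names are illustrative, not a schema reference; check the raw-frontmatter view in the Library for the real shape.

```markdown
---
model: heavy
provider: anthropic
effort: high
tools:
  allow: [Read, Edit, Bash]
  deny: [Browser]
paths:
  allow: ["src/**"]
  deny: [".env*"]
isolation: fork        # none | conversation | fork
input: careful         # default | read-only | careful | yolo
---
You are a refactoring specialist. Prefer small, reviewable diffs.
```

Save a sibling `my-agent.openai.md` with a tuned prompt, and the provider-specific variant resolves automatically when that provider is active.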
Custom skills bundle instructions and resources into reusable units. Inline or fork execution. Tool-allowlist restrictions. Bind to a slash command (/my-skill). Optional model and agent overrides. Bundle scripts, references, and assets in subfolders.
Custom hooks wire to events — file change, tool call, agent start or complete — with shell or skill handlers. Custom MCP servers configure HTTP/JSON-RPC, env-substituted headers, allow/deny tool lists, and auth flows. Custom slash commands bind any skill to /your-command. Custom keybindings come with conflict detection. Custom security rules and quick actions live alongside everything else.
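For flavor, a custom MCP server entry with an env-substituted header and tool allow/deny lists could be sketched like this. The field names are illustrative, not Voidleap's actual config schema.

```json
{
  "my-tracker": {
    "url": "https://mcp.example.com/rpc",
    "headers": { "Authorization": "Bearer ${TRACKER_TOKEN}" },
    "tools": {
      "allow": ["search_issues", "get_issue"],
      "deny": ["delete_issue"]
    }
  }
}
```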
Three-scope inheritance runs through all of it: author once globally, override per workspace, refine per project. The Library shows you the merged result and which scope owns each value, so you never wonder why a setting behaves the way it does.
Privacy & Local-First
Your code stays on your disk. Run entirely offline if you choose.
Privacy isn't a setting in Voidleap. It's the architecture. Every piece of state the app generates lives on your machine. The code index is a local SQLite database at ~/.voidleap/code-index. Logs, memory, plugin cache, configuration, threads, artifacts, worktrees, checkpoints — all local, all in your filesystem.
When you use a cloud provider, prompts and credentials go directly to the provider you picked. No data flows through Voidleap servers. There is no Voidleap-hosted billing, no markup, no abstraction layer. Bring your own keys.
Telemetry is off by default. Opt in if you want to share anything. Every LLM call is inspectable in the Control Center's live feed and on the Statistics Dashboard, so you can see exactly what was sent and to whom.
Run agents end-to-end without a single network call. Local model support covers Ollama with custom model registration. Tool-call parsing recognizes Harmony, Gemma, Qwen, and Hermes XML/JSON formats so open-source local models can use tools just like the hosted ones.
Per-workspace API key isolation keeps Personal, Company, Client A, and Client B credentials separate. One workspace can't see another's keys. Credentials are stored locally with restricted file permissions. MCP credentials cache locally at ~/.voidleap/mcp-credentials.json.
Browser sessions partition per working directory, so cookies don't leak across projects. Optional sandbox mode runs commands inside a macOS Seatbelt profile with kernel-level write restrictions for high-stakes work.
Workspaces
One config, three scopes, total flexibility.
Three scopes drive every config in the app: Global, Workspace, Project. Set a default model globally. Override it for a Client Work workspace with its own API key and security rules. Override again for a project that needs a different model or stricter security.
Workspaces aren't a folder grouping. They're a layer of the configuration system. Agents, skills, tools, MCP servers, hooks, models, providers, security, automations, marketplaces, quick actions, keybindings — all of them merge along the same hierarchy.
The Library shows you which scope owns which value, and what the merged result is. No guessing which file or env var won. Switch the active scope once and the Library, Dashboard, Control Center, and the marketplace install target all follow.
Workspace-level isolation gives each workspace its own API keys, security rules, and marketplaces. Personal stays separate from Company. Client A stays separate from Client B.
On top of the config layer sits the Workspace → Project → Thread organization. A workspace switcher modal is one keystroke away. Each thread carries its own model, agent, mode, worktree, and browser session. Merged-scope configuration is a first-class concept here, not a workaround.
Context Management
Real context engineering. Not a 'compact' button.
Three cleanup operations, each with a clear job. /trim takes about a second and removes paired tool calls. /prune is AI-assisted and clears stale text rounds. /compact summarizes old history when you can't afford to lose it.
A live XP bar segments your context by category — system, conversation, artifacts, files — against the model's actual limit. The Context Panel below it breaks every round into typed entries: user text, assistant text, thinking, tool calls, subagents, images, documents.
Auto-compaction triggers at 75% during long agent loops. Your last three user turns are protected and never auto-removed. Bash output is stored as artifacts and only shipped to the model when it's relevant.
AI Prune Suggest mode shows you what it wants to remove before applying. Clear markers hide everything before a point in the conversation. Per-agent memory blocks persist in named sections (## Project Setup, ## User Preferences). @-mention syntax attaches files, symbols, or threads as fresh context.
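A per-agent memory file with the named sections mentioned above could look like this; the contents are an invented example.

```markdown
## Project Setup
- Monorepo: pnpm workspaces, Node 22
- Tests live in `tests/`, run with `pnpm test`

## User Preferences
- Prefer named exports; no default exports
- Keep diffs small and reviewable
```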
Three Anthropic cache breakpoints — static system, history boundary, current turn — push hit rates into the 85–95% range. Attached files become their own cache breakpoint, so unchanged files stay cached across turns. Cache hit rate is visible in the dashboard so you can tune your spend.
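For context, Anthropic-style cache breakpoints are expressed with `cache_control` markers on content blocks in the request body. A generic sketch of the idea, not Voidleap's internal request shape:

```json
{
  "system": [
    {
      "type": "text",
      "text": "...static system instructions...",
      "cache_control": { "type": "ephemeral" }
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "...history up to the boundary...",
          "cache_control": { "type": "ephemeral" }
        },
        { "type": "text", "text": "...current turn..." }
      ]
    }
  ]
}
```

Everything before a breakpoint is served from cache on the next request as long as it is byte-identical, which is why stable prefixes matter.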
Every cleanup operation, every protected round, every cache breakpoint, every artifact toggle is surfaced as UI. Nothing is buried in logs. You see what the agent sees.
Statistics Dashboard
Every cost, every token, every latency, every cache hit — charted live.
Thirty named charts and tables. Cost trend, cost per request, token volume, latency distribution, cache efficiency, tool success rate, RPM, TPS — all live, accurate to the millisecond, never sampled.
Filter by All, Workspace, or Project. Set any time range. Click any chart to expand it fullscreen. Export to file when you need to share.
Comparison tables stack agents and models side by side so you can see which one earns its budget. The Recent Threads table and Steps Per Thread give you the per-thread story behind any spike.
The dashboard runs off the same event stream as the chat, so what you see is what actually happened. Token bills become an answerable question, not a mystery line item.
Code Indexing
Symbol-level navigation. Not text search.
Most coding agents fall back to grep. Voidleap maintains a persistent symbol index across fifteen languages and exposes it as four named tools that both you and the agent can call.
SearchSymbols finds any function, class, type, or method by name. FindReferences walks every caller, type ref, and import, with high / medium / low confidence tiers. FileOutline returns a file's symbol tree with line ranges so the agent reads only the function it needs. GetSymbol fetches full source by ID, with a drift-safe re-scan if the file changed.
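As an illustration, a FileOutline result might resemble the following; the field names here are invented for this sketch, not the tool's actual output format.

```json
[
  { "symbol": "AuthService", "kind": "class", "lines": [12, 88] },
  { "symbol": "AuthService.login", "kind": "method", "lines": [20, 41] },
  { "symbol": "AuthService.refresh", "kind": "method", "lines": [43, 60] }
]
```

With line ranges attached, the agent can pull just `AuthService.login` instead of reading the whole file.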
Fifteen languages: TypeScript, JavaScript, Python, Go, Rust, Java, C, C++, C#, Ruby, PHP, Kotlin, Swift, Scala. Indexed symbol kinds: classes, interfaces, enums, functions, methods, types, variables, properties.
BM25-ranked search means typing 'auth' returns getUserAuth, TokenAuthClient, authenticate() in order of relevance, not alphabetical. camelCase and snake_case fuzzy matching are on by default.
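A minimal sketch of what camelCase/snake_case-aware matching involves, assuming a simple split-on-word-boundary approach; this is illustrative only, not Voidleap's actual matcher.

```typescript
// Split an identifier into lowercase words at camelCase and
// snake_case/kebab-case boundaries. (Runs of capitals like "HTTP"
// are not handled; this is a deliberately minimal sketch.)
function splitIdentifier(name: string): string[] {
  return name
    .replace(/([a-z0-9])([A-Z])/g, "$1 $2") // camelCase boundaries
    .replace(/[_-]+/g, " ")                 // snake_case / kebab-case
    .toLowerCase()
    .split(" ")
    .filter(Boolean);
}

// Match when the query prefixes any word of the identifier,
// or appears in the identifier's initials.
function fuzzyMatch(query: string, symbol: string): boolean {
  const q = query.toLowerCase();
  const parts = splitIdentifier(symbol);
  return (
    parts.some((p) => p.startsWith(q)) ||
    parts.map((p) => p[0]).join("").includes(q)
  );
}

console.log(fuzzyMatch("auth", "getUserAuth"));       // true
console.log(fuzzyMatch("auth", "token_auth_client")); // true
```

A real implementation layers BM25 scoring on top so that more relevant matches rank first; the word-splitting step is what makes 'auth' hit the middle of `getUserAuth` at all.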
The index is persistent and stays in sync with a debounced watcher, so it doesn't redo work between requests. Symbol IDs are drift-safe via signature-hash re-scan, so they survive code edits. Cross-file reference tracking carries confidence tiers and the enclosing-symbol context.
Auto-built per project. No setup, no manual index command. Scales past fifty thousand files without loading the codebase into context. Used by every agent — Explorer, Oracle, Build, Plan all call these tools instead of grep.
Performance & Engineering
Long agent threads stay smooth.
Engineered, not assembled. The editor, virtualizer, chat surface, layout system, configuration UI, dashboard, and marketplace are all custom-built for AI work — no fork, no off-the-shelf harness. That's why the bundle stays small, the virtualizer handles streaming without jitter, and the layout composes around AI panels (chat, browser, subagents, artifacts) instead of code panels.
The virtualizer is purpose-built. The streaming message renders in normal document flow so its height drives scrollHeight directly, instead of fighting React state. A dual-snap stick-to-bottom uses useLayoutEffect plus ResizeObserver to keep streaming text tracking the viewport. Once you scroll up, the viewport stays put even as content streams in below — no auto-scroll fighting.
Heights are predicted before rendering. A pretext measurement engine models the full message DOM tree and uses canvas measureText for word-by-word widths. Heights are cached and invalidated cleanly on font-size change, container resize, or entry toggle. Predict-then-measure means the virtualizer populates instantly from predictions, and ResizeObserver corrects mismatches without flicker.
Expanding a code block while you're scrolled up doesn't move the viewport. Above-viewport item resizes don't cascade into a jump. Thread switching is wrapped in React's <Activity>, so DOM, scroll position, and in-flight streaming content all survive a tab change. Switch back and the background thread is caught up.
Streaming is multi-thread aware. A 100ms text-buffer flush keeps output smooth without overwhelming React. A module-scope SSE singleton routes events to all open threads in parallel. A bootstrap endpoint collapses five startup HTTP requests into one. Auto-reconnect replays from lastEventId. Thread history pages lazily, 50 messages at a time.
CodeMirror 6 is roughly five times lighter than Monaco (~400KB vs ~2MB+). Lexical handles markdown — Meta's editor framework, designed for performance at scale. The Bun server runs native TypeScript with no transpile step, and the code index lives in embedded SQLite, no external DB process.
Thousand-message threads with hundreds of tool calls stay scrollable. Voidleap was engineered around that scale from day one.
Control Center
Mission control for every running agent.
A live LLM call feed shows prompts, responses, and tool calls as they happen, across every thread. You see the actual content streaming, not just status indicators.
Active threads are grouped by status: Running, Needs Input, Idle, Errored. Pause, resume, or stop any thread from one screen.
Filter by workspace or project. Jump to a thread in one click. Auto-refresh every second. Summary metrics across the top show current load, today's spend, and the active agent count.
Multi-Panel Dockview
An IDE layout, built for AI work.
Drag, split, and pin any panel like an IDE. Panel types include Chat, File View, Browser, Terminal, Subagent timelines, and Artifacts.
Tab groups support reorder and popout. Layouts persist per project. A right-side panel holds collapsed views.
An inner dockview powers the Library's detail view (Details, Code, Editor tabs). The Subagent panel shows nested timelines with per-subagent token accounting. The Artifact panel lets you inspect tool reads, writes, and plans.
A custom tab renderer adds close buttons, kebab menus, and badges where they actually help.
Multi-Agent Orchestration
Specialists you can shape — not generalists you can't.
Two primary agents ship as starting points: Build and Plan. Cycle between them with Shift+Tab. Both can be customized or replaced from the Library.
Thirteen named subagents come pre-defined as starting points, each authored with the same frontmatter you can use for your own. Oracle is the deep-reasoning specialist on a heavy model with up to 20k thinking tokens. Explorer is the codebase search specialist on a fast model, parallel-safe. Librarian handles web research and documentation lookups. Planner runs multi-perspective architecture analyses. Looker handles screenshots and PDFs. History-search recovers prior work. Plus Commit-message, Find-query, Generate-title, Plugin-auditor, Prune-context, Summarize-conversation, and Trim-artifacts.
Author your own subagents with the same depth: per-agent model and effort budget, allowed tools and MCP, context isolation mode, input mode, custom instructions, and provider-specific variants that resolve automatically.
Three context isolation modes per subagent: none, conversation, fork. Inter-agent messaging is type-safe, with blocking request/reply that doesn't expose prompt-injection risk. Spawn multiple subagents in parallel and watch them stream in the Subagent panel.
Four execution modes: Default, Read-only, Careful (approve every action), Yolo (auto-approve). Cycle with Tab.
Bring Any Model
Eight providers. Local-first. The goal is to support everything worth using.
Eight providers out of the box: Anthropic, OpenAI, Google Gemini, GitHub Copilot, Ollama, OpenRouter, Azure, Bedrock.
Bring your own API key for each provider, with one-click validation. Local Ollama config supports custom model registration so you can run agents without a cloud round-trip.
Model tiers (light, medium, heavy, image) route to the right model per provider automatically. Per-thread model and effort overrides let you switch mid-conversation. Per-workspace credential isolation keeps different work on different keys.
Local model tool-calling parity recognizes Harmony, Gemma, Qwen, and Hermes XML/JSON formats, so open-source local models can use tools just like the hosted ones.
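For example, Hermes-style models wrap a JSON tool call in XML tags roughly like the snippet below; the tool name and arguments here are hypothetical.

```xml
<tool_call>
{"name": "read_file", "arguments": {"path": "src/index.ts"}}
</tool_call>
```

The parser recognizes these wrappers in the raw completion stream, so a local model that was never trained on a hosted provider's native tool-call protocol can still drive the same tools.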
Provider-specific thinking is mapped to one effort scale (low, medium, high, max, xhigh). Adaptive thinking on Claude 4.6+, reasoning effort on the o-series, thinkingLevel on Gemini 3 — all unified.
Credentials and prompts go directly to the provider you picked. No data flows through Voidleap.
Built-in Browser
Real DOM automation, with you watching.
A live webview the agent drives and you watch. Ten browser tools: Open, Navigate, Click, Type, Screenshot, Evaluate JS, Get Content, Get Logs, Set Viewport, Close.
Device viewport presets cover responsive layout testing. Per-thread session isolation means cookies don't leak between projects. Worktree session inheritance copies cookies into new worktrees automatically.
Solve a 2FA prompt or CAPTCHA yourself, and the agent picks up where it left off.
Console log capture keeps the last 100 messages available to the agent.
Worktrees & Checkpoints
Each thread gets its own working copy.
Share one worktree across a group of threads, or give each thread its own.
Checkpoint history lets you restore the working copy to any prior state. A conflict resolver UI handles merges. An env editor handles per-worktree environment variables.
A branch picker modal and a PR creation dialog mean you can open a pull request straight from the thread.
Fork any thread off any point. The fork inherits settings and shares the worktree.
Automations & Hooks
Schedule agents. Trigger on events. Define your own shortcuts.
Custom cron-based automations come with templates and run logs. A visual cron builder is included for users who don't write cron syntax. Automations are scoped per workspace or per project.
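An automation definition could be sketched like this. The schema is illustrative; the schedule field is standard five-field cron syntax, here firing daily at 03:00.

```yaml
name: nightly-deps-audit
schedule: "0 3 * * *"     # minute hour day-of-month month day-of-week
skill: audit-dependencies # hypothetical skill name
scope: workspace
```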
The custom hooks system has an event selector (file change, tool call, agent start/complete, more) and a handler form for custom shell or skill invocation. Per-scope handler chains let one event fire different handlers depending on where you are.
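A hook wiring a file-change event to a shell handler might be sketched as follows; field names are illustrative, matching what the visual editor writes for you.

```yaml
on: file-change
match: "src/**/*.ts"
handler:
  type: shell
  run: "bun run lint --fix"
```

Because handler chains are per-scope, the same file-change event could run this lint handler globally and an extra project-specific handler in one repo.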
Custom Quick Actions are per-project pinned prompts with custom icons. Custom slash commands bind any skill to /your-command.
Six slash commands ship out of the box as starting points: /plan, /build, /fork, /trim, /prune, /compact. Copy any of them, modify the binding, and make it yours.
Granular Security
The agent only does what you let it do.
Visual editors for command security and path security. Author your own allow/deny rules at any scope. The three-scope hierarchy (Global, Workspace, Project) shows you the merged result before the agent acts.
Four execution modes: Default, Read-only, Careful (approve every action), Yolo (auto-approve).
AST-aware command parsing understands paths, redirects, pipes, subshells, and heredocs. It's not a naive blocklist. PowerShell parity covers scriptblock recursion.
Sensitive-path defaults are on out of the box (.ssh, .env*, .aws, .gnupg, bash_history). Extend or override per scope. Read-before-write is enforced — the agent has to read a file before editing it. Real-path resolution defeats symlink escapes.
Optional sandbox mode runs commands inside a macOS Seatbelt profile.
Editor & Annotations
Editors that don't bloat the bundle or stall the thread.
CodeMirror 6 powers the code editor — about five times lighter than Monaco (~400KB vs ~2MB+). No web workers, no CDN, instant startup. Lezer parsers handle syntax highlighting. Language packs lazy-load. CM6 Compartments let theme, font, language, and read-only state be reconfigured on the fly without recreating the editor.
Code intelligence is built in: hover tooltips, Ctrl/Cmd+Click go-to-definition, document highlights. Server-side lint integration is wired through setDiagnostics, driven by validation events from the agent.
Diff and merge view ship in two modes: split (two synced editors side by side) and inline (overlay diff in one editor). A custom Vercel 2024 theme is tuned for long sessions. An error boundary catches init failures with fallback content.
Lexical powers the markdown editor — Meta's framework, designed for performance at scale. A plugin architecture composes editor ref, editable state, autofocus, annotations, slash menu, mention menu, and table actions.
A slash menu inserts blocks (/heading, /code, /table). A mention menu attaches files, symbols, or threads as context. The annotation system overlays comments on highlighted text — selection toolbar, threaded discussion, context menu, all positioned by Lexical mark plugins. No DOM-walking hacks.
Live conversion to and from markdown happens on every change, no save round-trip. A table size selector handles quick inserts. LaTeX math and syntax highlighting render in the preview.
Marketplace & Bundle Editor
Build and publish your own. Install community plugins.
The Bundle Editor is the customization story made shareable. A visual manifest editor. Pick items to include from your existing skills, agents, or MCP servers. A built-in git workflow handles commit, push, and release. One click to create a GitHub repo. Another to submit to the Voidleap Directory. Drift tracking keeps sidecars on copied items so they sync on update.
A curated Voidleap Directory ships with verified plugins. Add custom marketplaces by HTTP URL. Tabs cover All, Skills, Agents, Connectors, Plugins, and Installed. Search and filter inside any marketplace.
Each plugin has detail panels for README, Manifest, Files, and Security. Every install runs through a security review first. A static analyzer flags binaries, symlinks, reverse shells, credential exfiltration, prompt injection, and unpinned refs. An on-demand AI auditor handles deep review when you want it. Verdicts: safe, suspicious, malicious.
Cross-format compatibility means you can install plugins in multiple manifest formats natively. No rewriting required.
Update notifications surface as amber badges in the sidebar. Per-scope install target lets you drop a plugin into Global, Workspace, or Project.
Notifications & Activity
You always know what your agents are doing.
A notifications bell collects grouped alerts. A toast manager handles in-the-moment feedback.
An activity indicator shows what the agent is doing right now: idle, thinking, streaming, tool, asking, error. A full activity log keeps the history.
Find-in-page works on long threads. A connection status banner makes reconnection state visible.
Onboarding & Polish
A desktop app that respects you.
A guided tour overlay welcomes first-time users. The EULA shows up on first launch. Release notes appear after updates.
Light and dark themes. A TTS section in settings reads assistant replies aloud.
An internal style guide page (/design route) ships with the app, in case you want to extend the UI yourself.
Auto-update is handled with signed releases over Cloudflare R2.
Get early access
We're in early access right now, letting people in a few at a time. When there's room, you'll get a time-limited invite to subscribe. Once you're in, we'll invite you to the discussions shaping what we build next.
We only email about your invite. No newsletter, no marketing.