Source Available

Your agents decide fast.
The reasoning is lost after every session.
Lerim captures the why.
No agent starts blind.

The context graph layer for coding agents. Watches sessions across Claude Code, Cursor, Codex, and more. Extracts the decisions, the reasoning, and the why. Makes it available to every agent, every session.

Claude Code
Cursor
Copilot
OpenAI Codex
Kiro
Windsurf
Aider
VS Code
Cline
OpenCode
Zed
Amp

AI agents decide fast, but the reasoning is lost after every session. Decisions get re-debated. Patterns get re-discovered. The why behind every choice is scattered across agents, lost between sessions.

Everyone stores memory. Nobody extracts the reasoning. Lerim does.

Your agents learn.
Even while you sleep.

01

Agents work. Lerim watches.

Every coding session across Claude Code, Cursor, Codex, or any supported agent is automatically captured. No manual tagging, no workflow changes.

02

The reasoning is extracted.

LLM pipelines turn raw conversations into structured knowledge: the decisions made, the reasoning behind them, and the pitfalls learned. Stored as plain markdown in your repo.
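Lerim's exact record schema isn't shown here, but as a hypothetical sketch, an extracted decision record might read something like this (the headings and fields are illustrative, reusing the SQLite example from the CLI demo below):

```markdown
# Decision: Chose SQLite for the index

## Context
The index is read-heavy, single-writer, and must work offline.

## Decision
Use SQLite rather than Postgres for the local index.

## Reasoning
No server to run, ships as a single file, and WAL mode covers the
concurrency the index actually needs.
```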

03

Every agent starts with the why.

Any agent can query the context graph. Start a new session in a different tool and it builds on your project's full decision history. Not just what was done, but why.

Everything you need to
capture and share the why

Plain markdown, no lock-in

Memories live as human-readable markdown in your repo. Git-friendly, portable. git diff what your agents have learned.

Every agent, one graph

Claude Code, Codex, Cursor, OpenCode. What one agent learns, every agent recalls. One shared context graph across all your tools.

Ask in plain language

"Why did we choose Postgres?" Get evidence-cited answers drawn from your project's full decision history.

Runs in the background

Syncs new sessions automatically. Extracts, deduplicates, and refines context continuously with zero manual effort.

Context graph

Decisions and learnings are linked, not isolated. Agents follow connections to discover related context. Explore the graph visually in the dashboard.

Consolidation and decay

Merges duplicates, consolidates related learnings, and archives stale entries. Context stays sharp, not cluttered.

Install once. Agents keep learning.

Recommended

Install as a skill

Add Lerim as a skill to any supported agent. It runs in the background automatically -- syncing sessions, extracting learnings, consolidating overlap, and forgetting low-value noise with zero manual effort.

$ npx skills add lerim-dev/lerim-cli
  Installed lerim skill.
  Works with: Claude Code, Cursor, Copilot, Codex...

Your agent now knows how to use Lerim. It can recall relevant context before a task and write back new learnings after it finishes -- across every session and every tool.

Or use the CLI

Full control from the terminal. Set up once, start the service, and query your project's learning history directly.

$ pip install lerim
$ lerim init
  Detected: claude, codex
$ lerim project add .
$ lerim up
  Lerim is running at http://localhost:8765
$ lerim ask "Why SQLite over Postgres?"
  Decision [dec-041]: Chose SQLite for the index...

Plain markdown.
No lock-in.

Your agent learning records live as human-readable markdown files in your repo. Version-controlled, portable, and future-proof.

git diff what your agents learned.

.lerim/
  memory/
    decisions/    # architectural choices
    learnings/    # patterns & pitfalls
    summaries/    # session records
    archived/     # retired entries
  config.toml     # project settings