ClevAgent is in Early Access. Please contact us if you have any questions!

Our Logics

The logics that wake ClevAgent AI.
What each rule catches and why it matters.

Note
Only confident interventions. When ClevAgent AI speaks, it has a reason.
duplicate-read

Re-reading files already in context

Agents sometimes re-read a file they've already loaded earlier in the same session. Every re-read costs tokens and pushes older context further back in the model's memory, leaving less room for actual thinking. Over a long session, this quietly compounds. Academic benchmarks (LoCoBench-Agent) list redundant tool use as a formal waste metric.

Our system tracks the file paths your agent reads across a rolling window of recent calls. When the same file is read more than once without an intervening edit, ClevAgent AI pauses the turn and points the agent back to the content it already has.

What ClevAgent AI injects
“You read {path} at turn {N}. The content is in your context. No need to re-read. Answer from memory or use Grep for a specific excerpt.”
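The rolling-window check described above can be sketched in a few lines of Python. This is an illustrative sketch, not ClevAgent's actual implementation: the `DuplicateReadDetector` class, the `on_tool_call` hook, and the window size of 50 calls are all assumptions made for the example.

```python
from collections import deque

class DuplicateReadDetector:
    """Illustrative sketch: flag a Read of a path that was already read
    in the recent window with no intervening Write/Edit to that path."""

    def __init__(self, window=50):
        # Rolling window of recent tool calls: (turn, tool, path).
        self.events = deque(maxlen=window)

    def on_tool_call(self, turn, tool, path):
        alert = None
        if tool == "Read":
            # Walk backwards: the most recent event on this path decides.
            for t, prior_tool, p in reversed(self.events):
                if p != path:
                    continue
                if prior_tool in ("Write", "Edit"):
                    break  # file changed since last read; re-read is fine
                if prior_tool == "Read":
                    alert = (f"You read {path} at turn {t}. The content is "
                             f"in your context. No need to re-read.")
                    break
        self.events.append((turn, tool, path))
        return alert
```

Note that an edit to the file resets the check: a re-read after a Write or Edit is legitimate, so only unmodified files trigger the pause.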
stale-reread

Forgetting its own writes

After an agent writes to a file, sometimes it reads the same file back again just to verify or re-load. But the content it wrote is already in its context. Re-reading burns tokens on information the agent already has, and pushes newer useful context further back in memory. DAPLab's analysis of coding-agent failure modes flags this as context poisoning.

Our system records every file path your agent writes during a session. If that same path gets read later without another process modifying it, ClevAgent AI pauses and points the agent back to its own write as the source of truth.

What ClevAgent AI injects
“You wrote {path} at turn {N}. The file content is exactly what you wrote then. Use your own write as source of truth. Only re-read if you suspect external modification.”
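A minimal sketch of the write-tracking idea, under stated assumptions: the class name, the `on_tool_call` hook, and the `externally_modified` flag (which in practice might come from an mtime or checksum check) are hypothetical, not ClevAgent's real API.

```python
class StaleRereadDetector:
    """Illustrative sketch: remember every path the agent wrote; a later
    Read of that path, with no external modification observed, triggers
    a nudge back to the agent's own write."""

    def __init__(self):
        self.writes = {}  # path -> turn of the agent's last write

    def on_tool_call(self, turn, tool, path, externally_modified=False):
        if tool in ("Write", "Edit"):
            self.writes[path] = turn
            return None
        if tool == "Read" and path in self.writes and not externally_modified:
            t = self.writes[path]
            return (f"You wrote {path} at turn {t}. The file content is "
                    f"exactly what you wrote then. Use your own write as "
                    f"source of truth.")
        return None
```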
file-path-advisor

Writing files in the wrong place

When an agent writes a file, it has to pick a location. Agents often default to the current directory, or drop files into /tmp, or invent random timestamp-based names. Over time, the project fills with orphan files and the agent itself can't find what it wrote last week. Good file placement is one of the things that separates a smart agent from a busy one.

Every Write call passes through a path check. Persistent writes to /tmp, files placed outside the project root, and filenames that are just timestamps or UUIDs all trigger a pause: ClevAgent AI suggests a better home and naming pattern.

What ClevAgent AI injects
“You're writing {path}, which is outside the project root ({cwd_top}). If this should live in the project, move it under {cwd_top}/ with a clear directory name. If it's genuinely temporary, prefer `mktemp` or ~/.cache/.”
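The path check can be sketched as a simple classifier over the destination path. This is a hedged sketch: the `check_write_path` function and the junk-name regex are illustrative heuristics, not ClevAgent's actual rules.

```python
import re
from pathlib import PurePosixPath

# Hypothetical heuristic: a name that is only a long digit run (timestamp)
# or a UUID, optionally with an extension.
JUNK_NAME = re.compile(
    r"^(?:\d{6,}|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
    r"(?:\.\w+)?$",
    re.IGNORECASE,
)

def check_write_path(path: str, project_root: str):
    """Return an advisory message for a suspicious write path, else None."""
    p = PurePosixPath(path)
    root = PurePosixPath(project_root)
    if str(p).startswith("/tmp/"):
        return (f"You're writing {p} to /tmp. If it must persist, "
                f"move it under {root}/; if it's genuinely temporary, "
                f"prefer mktemp or ~/.cache/.")
    if p.is_absolute() and root not in p.parents:
        return (f"You're writing {p}, which is outside the project root "
                f"({root}). Move it under {root}/ with a clear directory name.")
    if JUNK_NAME.match(p.name):
        return (f"The filename {p.name} is just a timestamp or UUID. "
                f"Pick a descriptive name.")
    return None
```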
file-length-advisor

Piling everything into one file

Once a file grows past a thousand lines, it takes longer for both humans and agents to reason about. Every new edit loads the whole file into context, burning tokens on sections that have nothing to do with the current task. The agent also has a harder time finding the right place to make changes. Smaller, well-organized files help the agent stay on target.

Our system tracks the line count of files your agent writes and reads. When a file crosses the 1,000-line threshold, ClevAgent AI pauses the next edit and suggests a reorganization rather than another append.

What ClevAgent AI injects
“{path} is {N} lines. Before adding more, summarize older / stable sections in place (keep contracts verbatim, collapse historical context to 1-line pointers), or split by concern (e.g., move {section} to a sibling file). Don't just trim. Re-organize.”
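The threshold check itself is straightforward; a minimal sketch, assuming a hypothetical `FileLengthAdvisor` class and the 1,000-line threshold from the description:

```python
class FileLengthAdvisor:
    """Illustrative sketch: track line counts of files the agent touches;
    when a file crosses the threshold, suggest reorganizing before the
    next append."""

    def __init__(self, threshold=1_000):
        self.threshold = threshold
        self.line_counts = {}  # path -> last observed line count

    def on_file_seen(self, path, content):
        n = content.count("\n") + 1
        self.line_counts[path] = n
        if n > self.threshold:
            return (f"{path} is {n} lines. Before adding more, summarize "
                    f"stable sections in place or split by concern.")
        return None
```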
memory-md-optimizer

Letting memory files balloon

CLAUDE.md and MEMORY.md get loaded at the start of every session. Every line in them costs tokens on every startup, forever. Small bloat compounds: a file that started at 200 lines can grow to 2,000 after a few months of unchecked appends, and each session pays that tax before any real work begins. Anthropic's own best-practices docs flag this as a common trap.

Our system checks memory files at the start of each session. When a file crosses 500 lines or an estimated 10K tokens, ClevAgent AI pauses once per session and walks the agent through a targeted cleanup: what to keep, what to archive, and what to collapse into one-line pointers.

What ClevAgent AI injects
“Your memory file {path} is {N} lines. Loaded every session, burning tokens every start. Trigger a memory cleanup now: (a) move ‘Completed’ items older than 1 week to archive/, (b) collapse verbose sections to 1-line pointers, (c) keep only ‘Active Priorities’ and ‘Non-obvious conventions’ in full.”
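A sketch of the session-start size check, under stated assumptions: the ~4-characters-per-token estimate is a common rough heuristic, and the function names are hypothetical.

```python
LINE_LIMIT = 500
TOKEN_LIMIT = 10_000

def estimate_tokens(text):
    # Rough heuristic (assumption): ~4 characters per token.
    return len(text) // 4

def check_memory_file(path, text, alerted_this_session):
    """Illustrative once-per-session check on a memory file's size."""
    if alerted_this_session:
        return None  # fire at most once per session
    lines = text.count("\n") + 1
    if lines > LINE_LIMIT or estimate_tokens(text) > TOKEN_LIMIT:
        return (f"Your memory file {path} is {lines} lines. Loaded every "
                f"session, burning tokens every start. Trigger a memory "
                f"cleanup now.")
    return None
```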
error-loop

Retrying the same error

When an agent hits an error, it often retries the exact same action without changing anything. Each retry is another failed call, and the agent's context fills up with near-identical error messages that crowd out useful information. By the third retry, you've paid three times for zero progress.

Our system records the error signatures from each failed tool call. When the same signature appears twice in a row, ClevAgent AI pauses the turn, reads the actual error text, and suggests a specific fix instead of letting the retry continue.

What ClevAgent AI injects
“{error_signature} has fired {N} times in a row. Inspect the specific cause (e.g., {hint}) and adjust the call before the next retry.”
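The signature-matching idea can be sketched as follows. The normalization step (masking numbers and hex addresses so near-identical errors collapse to one signature) is an assumption about how such a signature might be built, not ClevAgent's actual scheme.

```python
import hashlib
import re

def error_signature(error_text):
    """Mask volatile parts (numbers, hex addresses) so near-identical
    errors hash to the same short signature."""
    normalized = re.sub(r"0x[0-9a-f]+|\d+", "#", error_text.lower())
    return hashlib.sha1(normalized.encode()).hexdigest()[:12]

class ErrorLoopDetector:
    """Illustrative sketch: pause when the same error signature fires
    twice in a row."""

    def __init__(self):
        self.last_sig = None
        self.streak = 0

    def on_tool_error(self, error_text):
        sig = error_signature(error_text)
        if sig == self.last_sig:
            self.streak += 1
        else:
            self.last_sig, self.streak = sig, 1
        if self.streak >= 2:
            return (f"{sig} has fired {self.streak} times in a row. Inspect "
                    f"the specific cause and adjust the call before the "
                    f"next retry.")
        return None
```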
cost-spike

When one call costs too much

Sometimes a single call costs 10x or 20x more than the agent's usual rhythm. That spike often means the agent pulled a large file it didn't need into context, or picked a heavier model for a task a cheaper model could handle. These one-off calls are invisible in a daily total but add up fast across sessions.

Our system tracks the cost of each call against the agent's recent average. When one call jumps well past that baseline, ClevAgent AI pauses the turn and asks whether the next call should narrow its scope.

What ClevAgent AI injects
“This call cost {X}x your recent average ({$cost}). Before the next call, narrow the context (e.g., grep for the specific section instead of reading the full file) or switch to a cheaper model for this step.”
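A minimal sketch of the baseline comparison, assuming a rolling window of 20 calls, a 10x spike multiplier, and a short warm-up before any alert fires; all three parameters are illustrative.

```python
from collections import deque

class CostSpikeDetector:
    """Illustrative sketch: compare each call's cost to the rolling
    average of recent calls and flag large jumps."""

    def __init__(self, window=20, multiplier=10.0, min_history=5):
        self.costs = deque(maxlen=window)
        self.multiplier = multiplier
        self.min_history = min_history  # warm-up before alerting

    def on_call(self, cost):
        alert = None
        if len(self.costs) >= self.min_history:
            avg = sum(self.costs) / len(self.costs)
            if avg > 0 and cost > self.multiplier * avg:
                alert = (f"This call cost {cost / avg:.1f}x your recent "
                         f"average (${cost:.2f}). Narrow the context or "
                         f"switch to a cheaper model for this step.")
        self.costs.append(cost)
        return alert
```

The warm-up matters: with too little history, the first expensive-but-legitimate call of a session would trip the alarm against a baseline of one or two cheap calls.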