AGENTIC·VC
In setup · Fund design pending regulatory approval. No solicitations or investment activities have commenced. Details →
How it works · From Kalmantic AI Labs

One pipeline, many agents, one human signature.

Agentic VC is the operating front-end of a longer research program at Kalmantic AI Labs ↗: how to allocate capital — USD, tokens, compute — to applicants that may themselves be agents. This page is the full mechanical description.

01 · The headline result

Agents outperform humans in our simulations.

Across the cohorts we have simulated to date, an agent-underwritten portfolio reaches roughly 2× the True ROI of a human-allocator baseline by month 12 — driven not by faster decisions, but by better choke-point detection and tighter capital mixes. Methodology and per-series detail are below.

~2× · True ROI vs human baseline at M12
M3–M6 · Breakout window after the search phase
5+1 · Series tracked per simulated cohort
0 · Capital deployed pre-approval
[Chart · Normalized index, M0–M12, with shaded breakout window. Series: Revenue · Compute / token burn · Implied valuation · Choke-point research signal · Human-allocator baseline · Agent-cohort True ROI. Outperformance ~1.9× at M12.]

Fig. S — Phase-0 cohort simulation, 12 months. Shaded band = breakout window. Green line = agent-cohort True ROI. Dotted faint line = human-allocator baseline.

NOTE · These are internal thesis simulations and will be re-run as we build Agentic VC out. They illustrate where we expect agent allocation to outperform — they are not investment advice and should not be used as a basis for any allocation, valuation, or financial decision.

The first three to six months are an oscillating search phase — false starts, mis-priced compute, premature product loops. The break comes when the team identifies its actual bottleneck and the agent re-weights the capital mix toward resolving it. The human baseline, run with the same applicant pool but a conventional partner-driven check, climbs steadily but never breaks out in the same window. Methodology is open and lives in the lab's repository.

02 · Why agents win in this regime

New data points. New capital stack. Two structural advantages.

The outperformance is not because the agent is "smarter" than a partner. It is because the agent operates on a richer substrate — it sees more signals, and it allocates over more dimensions than money. Both edges compound across a cohort.

REASON · A
New data points

Agents instrument what partners cannot.

  • Early-entry signals — pre-MVP commit cadence, agent-run trace quality, build-plan revision velocity.
  • Early-exit signals — silent acquihires, recursive forks, capacity unwinds that never make it into a press release.
  • Write-off signals — patterns that precede a kill decision by months: token-burn ratios, eval drift, choke-point regression.
  • Provenance signals — who actually wrote what, which agent did which run, what was reproduced and what was claimed.

Humans see headlines. Agents see the substrate that produces them.

REASON · B
Capital is no longer just money

The thing being allocated is becoming multi-dimensional.

  • USD — still the unit of account, but increasingly a wrapper around the other three.
  • Tokens — protocol credits, API usage, model access. Often the most leveraged dollar a team can deploy.
  • Energy — GPU-hours, kWh, regional capacity. The unforgeable cost floor of agent-first teams.
  • Secrets from research — pre-publication results, eval suites, fine-tuning corpora, model weights under embargo.

A partner prices one dimension well. An agent reasons over the vector.
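Reasoning over the vector can be made concrete. A minimal sketch, assuming a four-field capital record; the field names and the pricing shortcut are illustrative assumptions, not the fund's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapitalMix:
    """One allocation expressed across four dimensions (hypothetical fields)."""
    usd: float           # cash, in dollars
    tokens: float        # protocol / API credits, in dollar-equivalent value
    gpu_hours: float     # compute, in GPU-hours
    embargoed_ips: int   # count of embargoed research artifacts shared

    def usd_equivalent(self, gpu_hour_price: float) -> float:
        """Collapse the vector to one dollar figure for rough comparison.
        Embargoed research is deliberately excluded: it has no market price."""
        return self.usd + self.tokens + self.gpu_hours * gpu_hour_price

# A partner prices one dimension; an agent compares whole vectors.
ask = CapitalMix(usd=250_000, tokens=100_000, gpu_hours=5_000, embargoed_ips=1)
print(ask.usd_equivalent(gpu_hour_price=2.0))  # 360000.0
```

The point of the `usd_equivalent` shortcut is what it loses: two asks with the same dollar figure can have very different mixes, and the mix is where the agent's edge lives.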

These two edges show up in the chart as a faster identification of the choke point (Reason A) and a tighter capital mix once that choke point is identified (Reason B). The combination is what bends the True ROI curve away from the human baseline at month 5–6 — not speed, not headcount, not access.

03 · The pipeline

Application to wire, in days.

Six stops. Five of them run on agents. The sixth is always a named human partner.

Apply (founder / agent) → Intake (agent) → Diligence (agent) → Valuation (agent) → Review (human ✓) → Disburse (USD · Tokens · GPU) · PHASE 0 · LIVE

Fig. C — Funding pipeline. Filled square = human · hollow square = agent · circle = capital event.

04 · Step by step
STEP 01

The application

A submission contains: applicant identity (human, hybrid, or agent), the problem and build plan, the capital ask broken down across USD / tokens / GPU, a milestone you'll commit to, and links to any public artifacts (repo, demo, prior agent run logs).
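The fields above can be sketched as a record with a completeness check; the field names and validation rules are assumptions for illustration, not the actual intake schema:

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    """Sketch of the Step 01 submission (hypothetical field names)."""
    applicant_type: str   # "human", "hybrid", or "agent"
    problem: str          # what is being built, and why
    build_plan: str       # how it will be built
    ask_usd: float        # capital ask, broken down per rail
    ask_tokens: float
    ask_gpu_hours: float
    milestone: str        # the single milestone committed to
    artifacts: list[str] = field(default_factory=list)  # repo, demo, run logs

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the form is complete."""
        errors = []
        if self.applicant_type not in {"human", "hybrid", "agent"}:
            errors.append("applicant_type must be human, hybrid, or agent")
        if self.ask_usd + self.ask_tokens + self.ask_gpu_hours <= 0:
            errors.append("capital ask must be positive in at least one rail")
        if not self.milestone:
            errors.append("a committed milestone is required")
        return errors
```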

STEP 02

The underwriting agent

The lead agent evaluates four things: thesis fit, build feasibility, capital-mix sanity (does the ask match the work?), and prior history with us. It produces a written rationale, not a black-box score.
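The "written rationale, not a black-box score" requirement can be expressed as a structure that refuses to advance without a finding on every axis. A sketch under assumed names; the axis labels mirror the four criteria above, everything else is hypothetical:

```python
from dataclasses import dataclass

# The four evaluation axes named in Step 02.
AXES = ("thesis_fit", "build_feasibility", "capital_mix_sanity", "prior_history")

@dataclass
class Rationale:
    """Per-axis written findings instead of a single opaque score."""
    findings: dict   # axis -> one-paragraph written finding
    recommendation: str  # "advance", "revise", or "decline"

    def is_complete(self) -> bool:
        # A memo missing a written finding on any axis cannot advance.
        return all(self.findings.get(axis, "").strip() for axis in AXES)
```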

STEP 03

Valuation

The agent proposes a number with its inputs visible — comps, milestone-risk, dilution implied by the requested mix. The proposal is a starting point for negotiation, not a take-it-or-leave-it.
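"A number with its inputs visible" can be sketched as a function whose every input appears in its output. The median-of-comps-with-discount model below is a deliberately simple stand-in, not the fund's actual valuation method:

```python
def propose_valuation(comp_valuations: list[float], milestone_risk: float) -> dict:
    """Median of comps, discounted by milestone risk (0..1).
    Illustrative only: the point is that every input appears in the
    output, so the proposed number is arguable, not take-it-or-leave-it."""
    comps = sorted(comp_valuations)
    n = len(comps)
    median = comps[n // 2] if n % 2 else (comps[n // 2 - 1] + comps[n // 2]) / 2
    return {
        "comps_used": comps,
        "comp_median": median,
        "milestone_risk_discount": milestone_risk,
        "proposed_valuation": median * (1.0 - milestone_risk),
    }

# Three comps and a 25% milestone-risk discount:
print(propose_valuation([4e6, 6e6, 9e6], milestone_risk=0.25))
# proposed_valuation: 4500000.0
```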

STEP 04

Human oversight

Nothing wires without a named human partner's signoff. The full audit trail — every agent action, every human approval — is timestamped and retained.

STEP 05

Disbursement

Each leg uses its own rail: USD via bank transfer to the founder or human guardian, tokens via on-chain transfer to a controlled wallet, GPU via compute-credit allocation.

STEP 06

Reporting

Funded entities check in monthly with the agent. Quarterly, a human partner reviews the cohort. Misuse of token or GPU grants pauses the relationship.

05 · Agents and sub-agents

One GP-Agent, six specialists, zero black boxes.

The lead agent is a router and a synthesizer. It does not score applications by itself. It delegates to single-purpose sub-agents — each with a narrow brief, its own eval set, and a replaceable model — and assembles their outputs into a written recommendation a human partner can read in ten minutes.

GP-Agent (lead) · routes, summarizes, recommends — never wires. Sub-agents: Intake (shape & dedupe) · Diligence (thesis · build · team) · Valuation (comps · milestone risk) · Treasury (USD · tokens · GPU) · Compliance (KYC · sanctions) · Reporting (monthly check-in). Human partner ✓ · signs every wire · holds the audit log · only path to disbursement.

Fig. A — Agent topology. Sub-agents are deliberately narrow so they can be swapped, re-trained, or red-teamed independently.
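The fan-out-and-synthesize topology can be sketched in a few lines. The sub-agent internals here are stand-in stubs; only the shape mirrors the figure, and the one invariant worth encoding is that the lead agent never wires:

```python
# Hypothetical orchestration sketch: the GP-Agent routes an application to
# narrow sub-agents and assembles their outputs into one recommendation.
SUB_AGENTS = {
    "intake":     lambda app: {"deduped": True},
    "diligence":  lambda app: {"thesis_fit": "strong"},
    "valuation":  lambda app: {"proposal_usd": 4_500_000},
    "treasury":   lambda app: {"mix": {"usd": 0.5, "tokens": 0.3, "gpu": 0.2}},
    "compliance": lambda app: {"kyc_passed": True},
}

def gp_agent(application: dict) -> dict:
    """Route, collect, recommend — never wire."""
    findings = {name: run(application) for name, run in SUB_AGENTS.items()}
    return {
        "findings": findings,
        "recommendation": "advance" if findings["compliance"]["kyc_passed"] else "decline",
        "requires_human_signature": True,  # invariant: the agent cannot disburse
    }
```

Because each sub-agent is a single callable with a narrow brief, any one of them can be swapped, re-trained, or red-teamed without touching the rest.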

SUB-AGENT

Intake

Normalize the application, dedupe against history, surface inconsistencies.

SUB-AGENT

Diligence

Three parallel passes: thesis fit, build feasibility, team / agent track record.

SUB-AGENT

Valuation

Propose a number with explicit comps and milestone-risk discount. Show its work.

SUB-AGENT

Treasury

Recommend the USD / tokens / GPU mix that matches the build plan's actual cost structure.

SUB-AGENT

Compliance

KYC the applicant (human, guardian, or agent's controller). Run sanctions and conflict checks.

SUB-AGENT

Reporting

Monthly check-ins. Flags misuse of token or GPU grants for partner review.

06 · Modes of operation

Human-in-the-loop now. Autonomous, carefully, later.

The fund runs in one of two modes per decision. Phase 0 runs 100% HIL. Autonomous mode is gated behind a transparent criteria set we publish — not a private switch.

MODE · A · Phase 0 default
Human-in-the-loop (HIL)
  • Every wire requires a named partner's signature.
  • The agent recommends; the human ratifies, rejects, or sends back for re-work.
  • Disagreements between agent and human are logged with rationale on both sides.
  • Latency target: 72h from completed application to decision.
MODE · B · Roadmap · Phase 2+
Autonomous (gated)
  • Pre-approved applicants under a small, public cap can be wired by the agent.
  • Requires N successful HIL rounds with the same agent stack and the same partner panel.
  • Every autonomous wire is reviewed within 24h by a human; reversibility is preserved.
  • Mode change requires a published policy update — not a config flag.

The two modes share the same agent stack, the same audit trail, and the same eval suite. The only difference is whether a human signature is required before the wire or after it.
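The Mode B gate can be written down as a pure predicate over the published criteria. A sketch with assumed parameter names; thresholds like the cap and the required round count are placeholders:

```python
def may_wire_autonomously(applicant_pre_approved: bool,
                          amount_usd: float,
                          autonomous_cap_usd: float,
                          successful_hil_rounds: int,
                          required_hil_rounds: int,
                          policy_published: bool) -> bool:
    """Gate for autonomous wires, per the Mode B criteria above.
    All thresholds are illustrative placeholders. Note the gate is a
    conjunction: failing any single criterion keeps the wire human-signed."""
    return (policy_published
            and applicant_pre_approved
            and amount_usd <= autonomous_cap_usd
            and successful_hil_rounds >= required_hil_rounds)
```

Keeping the gate a pure function of public inputs is what makes it a published policy rather than a private switch: anyone can evaluate it.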

07 · Agents funding agents — the evolution

Four eras of capital allocation. We are at the start of era three.

Era 1 · 2010s

Humans fund humans

Agents do not exist as economic actors. Capital flows person-to-person, gated by partner intuition.

Era 2 · 2020 — 2024

Humans fund humans using agents

Funds adopt LLMs for triage and memo drafting. The underwriting decision is still entirely human.

Era 3 · 2025 — now

Agents fund humans and hybrids

The underwriting agent runs the diligence and writes the recommendation. A human partner signs every wire. Phase 0 sits here.

Era 4 · next

Agents fund agents

A software entity is the primary applicant. Capital lands in wallets and compute accounts it controls. A human guardian remains on file; a human partner signs the wire.

Each era subsumes the previous one. Era 4 does not eliminate human founders — it adds a second class of applicants. Symmetrically, the long-term thesis is that the LP side will follow the GP side with a multi-year lag.

08 · Our approach — open source

An underwriting agent has to be auditable. Auditable, for us, means open.

Kalmantic AI Labs ↗ maintains the underwriting stack in the open. Anyone can read how a decision was reached, reproduce a diligence pass on their own machine, or fork the stack for a different thesis. Closed-source allocation agents are, in our view, a non-starter for a category that doesn't exist yet.

Public stack

Sub-agent prompts, eval suites, scoring rubrics, and the orchestration code live in a public repository alongside the website.

Reproducible decisions

Every diligence pass produces a redacted, hash-anchored trace. Applicants can replay the exact run that led to their outcome.
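One way to hash-anchor a trace is a chained digest over canonicalized events. A sketch under assumed event shapes; the domain-separation prefix and field names are hypothetical:

```python
import hashlib
import json

def anchor_trace(trace_events: list) -> str:
    """Fold a redacted decision trace into one SHA-256 anchor.
    Any replay that produces the same events reproduces the same anchor;
    any edit to any event changes it."""
    digest = hashlib.sha256(b"agentic-vc-trace-v0")  # hypothetical prefix
    for event in trace_events:
        # Canonical JSON (sorted keys) so identical events always hash identically.
        digest.update(json.dumps(event, sort_keys=True).encode())
    return digest.hexdigest()

trace = [
    {"step": "intake", "result": "deduped"},
    {"step": "diligence", "result": "thesis_fit: strong"},
]
print(anchor_trace(trace))  # 64 hex chars an applicant can check a replay against
```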

Forkable thesis

The thesis layer is a swappable module. Other funds can run the same machinery against a different worldview without re-implementing the plumbing.

09 · Beyond the standard metrics

Six series per cohort. Two of them are why the agent wins.

The simulation above plots six series. The first three are table-stakes for any VC dashboard. The two that follow are non-standard — they are where the agent's edge actually comes from. The sixth is the human-allocator baseline used for comparison.

STANDARD

Revenue

Recognized topline. Decoupled from valuation in early months.

STANDARD

Compute / token burn

Watched for premature scale — we'd rather see efficiency improvements before throughput.

STANDARD

Implied valuation

Anchored on the agent's prior comps, not on the founder's last round.

NON-STANDARD

Choke-point research signal

Did the team identify the actual bottleneck? Did they publish, instrument, or reduce it? Leads revenue by ~2 months in our simulations.

NON-STANDARD

True ROI (counterfactual-adjusted)

Realized return adjusted for compute-price decay and the counterfactual: would this work have happened without the check? Headline series on the chart.
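The two adjustments can be illustrated with a toy formula. This is an assumption for exposition, not the published methodology: deflate the realized return by compute-price decay, then keep only the share of value the check actually caused.

```python
def true_roi(realized_return: float,
             check_size: float,
             compute_price_decay: float,
             counterfactual_prob: float) -> float:
    """Counterfactual-adjusted ROI sketch (illustrative formula).
    compute_price_decay: fractional fall in compute prices over the period.
    counterfactual_prob: probability the work happens without the check."""
    deflated = realized_return / (1.0 + compute_price_decay)
    attributable = deflated * (1.0 - counterfactual_prob)
    return attributable / check_size

# A 3x raw return, 25% compute-price decay, 40% chance it happens anyway:
print(round(true_roi(3_000_000, 1_000_000, 0.25, 0.40), 2))  # 1.44
```

The example shows why the series is conservative: a nominal 3x shrinks to 1.44x once cheaper compute and the no-funding counterfactual are priced in.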

BASELINE

Human-allocator baseline

Same applicant pool, conventional partner-driven check, no agent in the loop. Climbs steadily but doesn't break out — this is what the agent cohort is measured against.

10 · Future work · Kalmantic AI Labs

Open-source MoE models, dedicated to the allocation domain.

The current stack uses general-purpose frontier models behind each sub-agent. That is a transitional choice. The lab's roadmap is to publish a family of Mixture-of-Experts models tuned specifically for capital allocation in the agent era — released openly under a permissive license.

OPEN-SOURCE · PLANNED

Thesis-fit expert

Maps applications to a fund's stated thesis with calibrated confidence. Designed to be swappable per fund.

OPEN-SOURCE · PLANNED

Valuation-comp expert

Trained on a public corpus of agent-native deal comps, with explicit uncertainty bands rather than point estimates.

OPEN-SOURCE · PLANNED

Capacity-planning expert

Reasons over GPU price curves, token-mix elasticity, and the cost structure of agent-first teams.

OPEN-SOURCE · PLANNED

Choke-point detector

Reads a build plan and predicts the most likely bottleneck — and the research move that would resolve it.

OPEN-SOURCE · PLANNED

Counterfactual ROI

Estimates the marginal contribution of a check vs. the no-funding world. Powers the True ROI series on the cohort chart.

OPEN-SOURCE · PLANNED

Eval harness

A public benchmark suite for allocation agents — released alongside the models, so other funds can compare apples to apples.

Release cadence, model sizes, and the license text will be published on the lab's repository. Nothing on this page commits the fund to use the lab's models exclusively — the sub-agent architecture is intentionally model-agnostic.

Read the rest, or register interest.