AI coding agents choose dependencies based on training data popularity, not fitness for purpose. Sightglass intercepts those decisions and injects real-time evaluation — alternatives, CVEs, documentation — so your agent makes informed choices instead of reflexive ones.
Without Sightglass, your agent installs packages on autopilot. With it, every dependency decision gets five seconds of deliberation.
Express.js — the world's most popular Node framework — gets zero primary picks from Claude Code. Redux: 0 primary picks out of 88 responses. Meanwhile Railway gets 82% of Python deployment picks over AWS, GCP, and Azure combined. The training corpus creates winners and losers with no evaluation. Research: Amplifying.ai
Your agent installed 47 dependencies in the last session. It evaluated zero of them. Nobody prompted it to consider alternatives. Nobody asked "is this the best choice?" The reflex fires and the install happens.
Each dependency is an attack surface, a license obligation, and an architectural commitment — made without deliberation. Your compliance team can't audit what they can't see. Your CISO can't approve what has no trail.
GitHub Actions 94%, Stripe 91%, shadcn/ui 90%, Vercel 100% for JS deployment. When agents pick tools, they pick near-monopolies. And it shifts with every model version — Prisma went from 79% to 0% between Sonnet and Opus 4.6. Your architecture is a function of which model you ran that day.
Sightglass registers a Claude Code PreToolUse hook. When your agent reaches for a package, Sightglass intercepts, evaluates, and injects intelligence — all before the install runs.
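Registration is a single hook entry in Claude Code's settings file. A plausible sketch — the `sightglass hook` command name is an assumption, not the actual CLI:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "sightglass hook" }
        ]
      }
    ]
  }
}
```

The `Bash` matcher means the hook sees every shell command the agent is about to run, installs included, before it executes.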
Agent calls npm install, pip install, or cargo add. The PreToolUse hook fires and Sightglass catches it before execution.
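A minimal sketch of the interception step, assuming the hook receives the shell command from the Bash tool's input. The parsing here is illustrative, not Sightglass's actual implementation:

```python
import shlex

# Install subcommands worth intercepting, per package manager.
INSTALL_COMMANDS = {
    ("npm", "install"), ("npm", "i"),
    ("pip", "install"),
    ("cargo", "add"),
}

def extract_packages(command: str) -> list[str]:
    """Return the package names if `command` is an install, else []."""
    tokens = shlex.split(command)
    if len(tokens) < 2 or (tokens[0], tokens[1]) not in INSTALL_COMMANDS:
        return []
    # Everything after the subcommand that isn't a flag is a package.
    return [t for t in tokens[2:] if not t.startswith("-")]
```

So `extract_packages("npm install express left-pad")` yields `["express", "left-pad"]`, while an ordinary command like `ls -la` passes through untouched.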
A grounded LLM call (with web search) checks: Is this package actively maintained? Any known CVEs? What are the current alternatives? How does bundle size compare? Real-time data, not training recall.
The evaluation — including alternatives and documentation links — is injected back into the agent's context. The agent can read the docs, compare options, and make an informed decision.
Every decision is logged locally. What was the original choice? What did the agent switch to? This data feeds community intelligence — aggregate patterns that make every Sightglass user's agents smarter.
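Locally, the decision log needs nothing fancier than append-only JSONL. A sketch with hypothetical field names and file location:

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("sightglass-decisions.jsonl")  # hypothetical location

def log_decision(original: str, final: str, reason: str) -> None:
    """Append one dependency decision: what the agent first reached
    for, what it ultimately installed, and why."""
    record = {
        "ts": time.time(),
        "original_choice": original,
        "final_choice": final,
        "reason": reason,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

For example, `log_decision("moment", "date-fns", "smaller bundle, active maintenance")` records a reflex-to-informed switch — exactly the gap the community metrics measure.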
"AWS, Google Cloud, and Azure received zero primary deployment picks across 112 responses. Railway got 82%. The most valuable infrastructure companies in the world are invisible at the point of agent decision."
— Amplifying.ai, "What Claude Code Actually Chooses", Feb 2026
When thousands of developers instrument their agents, we build the first real-time map of how AI makes technical decisions. This data doesn't exist anywhere else.
See how agents decide across the community. Which packages get chosen, which get replaced, and why.
Which packages are agents adopting or abandoning fastest?
Aggregate vulnerability exposure from agent decisions across the ecosystem.
Track how often deliberation changes the outcome. Measure the gap between reflex and informed choice.
Express.js: most popular Node framework, zero agent picks. AWS: most used cloud provider, zero deployment picks. If agents don't recommend you, a growing share of new projects will never know you exist.
Our thesis is informed by Amplifying.ai's research — 2,430 Claude Code responses across 3 models, 4 project types, and 20 tool categories. The findings are stark: agents shape the modern stack through recall, not evaluation. Sightglass exists to change that.
Sightglass runs locally by default. Your code, your dependencies, your decisions — all analyzed on your machine. Community intelligence uses only anonymized, aggregated patterns. Opt-in at every level. Air-gapped mode for enterprise.
30 seconds to install. Your next agent session will be smarter.