Sentinel
A second AI that reviews every response before it reaches you.

Sentinel is a verification layer that runs after every AI response. Before anything reaches you, a second model reviews the response and asks one question: did the assistant actually check, or is it guessing?
If the assistant claims that your database has a certain column, that your webhook fires a certain way, or that your system is configured a particular way, and it never verified that by reading the code or querying the system, Sentinel blocks the response and forces it to check before the claim reaches you.
It catches the most dangerous kind of AI error: confident statements about things it hasn't looked at.
Sentinel is a post-generation stop hook that fires after every assistant turn across all workspaces. It parses the conversation transcript, extracts the assistant's latest response and the tools it called (file reads, database queries, API calls, web searches), then sends both to a fast classifier model (Claude Haiku).
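A minimal sketch of that extraction step, assuming a Claude Code-style Stop hook and a JSONL transcript whose lines wrap API messages; the `VERIFICATION_TOOLS` set and the field names are illustrative assumptions, not Sentinel's actual values:

```python
import json

# Assumed set of tools that count as "checking"; Sentinel's real list isn't shown here.
VERIFICATION_TOOLS = {"Read", "Grep", "Glob", "Bash", "WebSearch", "WebFetch"}

def load_messages(transcript_path: str) -> list[dict]:
    """Parse the JSONL transcript: one JSON object per line."""
    messages = []
    with open(transcript_path) as f:
        for line in f:
            if line.strip():
                messages.append(json.loads(line))
    return messages

def extract_turn(messages: list[dict], lookback: int = 6) -> tuple[str, set[str]]:
    """Return the latest assistant text plus the tool names called in the
    last `lookback` assistant turns (this turn and the previous five)."""
    assistant_turns = [m for m in messages if m.get("type") == "assistant"]
    if not assistant_turns:
        return "", set()
    recent = assistant_turns[-lookback:]
    tools_used: set[str] = set()
    for turn in recent:
        for block in turn.get("message", {}).get("content", []):
            if block.get("type") == "tool_use":
                tools_used.add(block.get("name", ""))
    # Only the final turn's text is what would actually reach the user.
    text = "".join(
        block.get("text", "")
        for block in recent[-1].get("message", {}).get("content", [])
        if block.get("type") == "text"
    )
    return text, tools_used
```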
The classifier applies a single test: does this response contain claims about live system state that weren't backed by a verification tool call in this turn or the previous five turns? If yes, the hook exits with code 2, which blocks the response and injects a correction instruction forcing the assistant to verify before restating the claim.
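How the verdict turns into a block, as a sketch: the prompt wording, the one-word verdict protocol, and the model alias are assumptions, while `messages.create` is the Anthropic Python SDK's real interface:

```python
import anthropic

# Hypothetical classifier prompt; Sentinel's actual wording isn't published here.
CLASSIFIER_PROMPT = """Below is an assistant's response and the tools it called \
in this turn and the previous five. Reply with exactly one word: UNVERIFIED if \
the response asserts live system state (schema, config, runtime behavior) with \
no backing tool call, otherwise VERIFIED.

Tools called: {tools}

Response:
{response}"""

CORRECTION = (
    "Blocked: your response asserts live system state you did not check. "
    "Read the relevant code or query the system, then restate the claim."
)

def is_unverified(response_text: str, tools_used: set[str]) -> bool:
    """Ask the fast classifier model for a one-word verdict."""
    client = anthropic.Anthropic(timeout=10.0)  # short timeout so a hung call fails open
    result = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed alias for the Haiku classifier
        max_tokens=5,                     # the verdict is a single word
        messages=[{
            "role": "user",
            "content": CLASSIFIER_PROMPT.format(
                tools=", ".join(sorted(tools_used)) or "none",
                response=response_text,
            ),
        }],
    )
    return result.content[0].text.strip().upper().startswith("UNVERIFIED")
```

Capping `max_tokens` keeps the classifier call cheap and forces an unambiguous verdict.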
If the assistant did use verification tools, the hook exits immediately without calling the classifier, so verified turns pay zero latency cost.
Fails open: if the API key is missing, the classifier times out, or the transcript can't be parsed, the response goes through unblocked.
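Tying it together, a sketch of the fail-open entry point, reusing `load_messages`, `extract_turn`, `is_unverified`, and `CORRECTION` from the sketches above; the shape of the control flow is the point, not the names:

```python
import json
import os
import sys

def main() -> None:
    try:
        payload = json.load(sys.stdin)          # hook input includes transcript_path
        messages = load_messages(payload["transcript_path"])
        text, tools_used = extract_turn(messages)

        # Fast path: a verified turn never touches the classifier.
        if tools_used & VERIFICATION_TOOLS:
            sys.exit(0)

        # Fail open: a missing API key means no check, not a blocked response.
        if not os.environ.get("ANTHROPIC_API_KEY"):
            sys.exit(0)

        if is_unverified(text, tools_used):
            print(CORRECTION, file=sys.stderr)  # stderr becomes the correction instruction
            sys.exit(2)                         # exit code 2 blocks the response
    except Exception:
        pass  # fail open: unparseable transcript, classifier timeout, anything unexpected
    sys.exit(0)

if __name__ == "__main__":
    main()
```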