VIA is a privacy-first AI reflection layer that introduces thoughtful friction before high-risk digital actions.
How VIA protects you
Catches rage, slurs, and impulsive drafts before they leave your keyboard. No block, just a pause and a better option.
Try it in chat →

Surfaces bank-fraud and family-emergency scam patterns the moment a message arrives. Shows you which signals match.
See scam chat →

Link microscopy explains what a URL actually does before you click it: tracking domains, redirects, suspicious punycode.
Inspect a link →

A financial surface with real-time risk scoring, transfer friction, and a timeline of every flagged action.
Open dashboard →

How VIA works
VIA watches the text you're composing, on your device, without sending it anywhere. Every keystroke runs through lexicons for rage, scam compliance, and slurs.
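The lexicon pass can be sketched as a simple local matcher. This is a minimal illustration, not VIA's actual implementation: the category names, word lists, and `scan_draft` function are hypothetical stand-ins for the real lexicons.

```python
# Minimal sketch of on-device lexicon screening. The categories and
# terms below are illustrative placeholders, not VIA's real lexicons.
import re

LEXICONS = {
    "rage": {"hate", "furious", "screw"},
    "scam_compliance": {"urgent", "wire", "gift card"},
}

def scan_draft(text: str) -> set[str]:
    """Return the lexicon categories that fire on a draft."""
    lowered = text.lower()
    words = set(re.findall(r"[a-z']+", lowered))
    fired = set()
    for category, terms in LEXICONS.items():
        # Single words match as tokens; multi-word terms as substrings.
        if any((" " in t and t in lowered) or t in words for t in terms):
            fired.add(category)
    return fired
```

Because this runs on every keystroke, a production version would use compiled patterns or a trie rather than rescanning the whole draft, but the shape is the same: text in, fired categories out, nothing sent over the network.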
When a signal fires, we ask a local Ollama model to explain the risk in plain language. No cloud roundtrip. No draft ever leaves your machine.
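Asking the local model looks roughly like the sketch below, using Ollama's standard `/api/generate` endpoint on its default port. The model name, prompt wording, and function names are illustrative assumptions, not VIA's actual code.

```python
# Sketch of asking a local Ollama instance to explain a flagged signal.
# Uses Ollama's standard /api/generate endpoint on localhost:11434;
# the model name and prompt are illustrative.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_explain_request(draft: str, signal: str) -> urllib.request.Request:
    """Build the local-only request; the draft never leaves the machine."""
    payload = {
        "model": "llama3.2",  # any locally pulled model works
        "prompt": (
            f"A message draft triggered the '{signal}' signal. "
            f"Explain the risk in one plain sentence.\n\nDraft: {draft}"
        ),
        "stream": False,
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def explain_risk(draft: str, signal: str) -> str:
    """Call the local model; requires a running Ollama instance."""
    with urllib.request.urlopen(build_explain_request(draft, signal)) as resp:
        return json.loads(resp.read())["response"]
```

Note the URL is loopback only: the request never crosses the network boundary, which is what makes "no cloud roundtrip" a structural guarantee rather than a policy.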
You get a 90-second pause, a softer rewrite, or a revision prompt. You can always send anyway; the decision stays yours, with more information.
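The intervention itself is just state: when it started, how much of the pause remains, and which options the user has. The sketch below is a hypothetical shape for that state; the class and method names are illustrative.

```python
# Hedged sketch of the intervention flow: a fired signal yields options,
# and the user can always override. Names are illustrative, not VIA's API.
import time
from dataclasses import dataclass, field

PAUSE_SECONDS = 90  # roughly how long a rage spike takes to pass

@dataclass
class Intervention:
    signal: str
    started_at: float = field(default_factory=time.monotonic)

    def remaining_pause(self) -> float:
        """Seconds left on the cooling-off timer (never negative)."""
        return max(0.0, PAUSE_SECONDS - (time.monotonic() - self.started_at))

    def options(self) -> list[str]:
        # The decision always stays with the user: sending is never blocked.
        return [
            "wait out the pause",
            "accept the softer rewrite",
            "revise the draft",
            "send anyway",
        ]
```

The key design point is that "send anyway" is always in the list; the timer adds friction, not a lock.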
The moment a draft leaves your device, you've trusted a vendor. VIA runs detection on a self-hosted Ollama instance you control. The local model sees your draft; we don't. Persona replies (the "troll", fake bank, etc.) are the only part that touches an external LLM, and that layer is opt-in, disclosed, and protected by a scripted fallback.
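The scripted-fallback guard on persona replies can be sketched as: the external call happens only with opt-in, and any failure or refusal drops to a canned script. The function, the reply table, and the persona key below are hypothetical illustrations.

```python
# Sketch of the scripted-fallback guard for opt-in persona replies.
# Only this layer may reach an external LLM; everything here is an
# illustrative stand-in for VIA's actual persona logic.
SCRIPTED_REPLIES = {
    "fake_bank": "This is a simulated bank message. Notice the urgency cue.",
}

def persona_reply(persona: str, opted_in: bool, call_llm=None) -> str:
    """Return a persona reply, preferring the live LLM only when opted in."""
    if opted_in and call_llm is not None:
        try:
            return call_llm(persona)  # the single disclosed external hop
        except Exception:
            pass  # any failure falls through to the script
    return SCRIPTED_REPLIES.get(persona, "Persona unavailable.")
```

Without opt-in, the external path is unreachable by construction, which is what makes the privacy claim auditable rather than contractual.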
~90s
is how long it takes for a rage spike to pass
from field observation of intervention replays
3+
VIA signals match most reviewed scam messages
urgency · secrecy · authority claim is the common triad
0
chat bytes leave your device by default
detection runs on a self-hosted local model
VIA is open infrastructure. Spin it up locally, point it at Ollama, and start reflecting on the messages you were about to regret.
Open VIA →