PsiGuard Documentation

Everything you need to integrate real-time AI monitoring into your applications.

Overview

PsiGuard is a real-time AI monitoring layer that sits between your application and any large language model. It analyzes every AI response as it's generated and provides instant feedback about the cognitive quality and reliability of the output.

Think of it as a seatbelt for AI. The AI does its job; PsiGuard keeps you safe when something goes wrong.

PsiGuard doesn't replace your AI model. It monitors it. Your existing setup stays exactly the same — PsiGuard adds a safety layer on top.

Quick Start

Getting started with PsiGuard takes less than five minutes:

1. Create an account

Sign up at psiguard.net/login. Free accounts include a generous monthly allowance of monitored conversations.

2. Open the Console

Once logged in, you'll land on the PsiGuard Console — a real-time monitoring dashboard. Select your AI model from the model selector in the top bar.

3. Start a conversation

Send a message. PsiGuard begins monitoring immediately. You'll see four cognitive metrics update in real-time on the right panel, along with an overall state assessment.

4. Toggle Protection

Enable "Protection Active" at the bottom of the chat to turn on active monitoring. When enabled, PsiGuard will flag or intervene when it detects unreliable output.

Console Dashboard

The PsiGuard Console provides a real-time view of your AI's cognitive behavior. The interface includes:

Conversation panel — Your chat with the AI, with safety badges on each response showing its assessed state.
Cognitive Monitor — A live-updating chart showing four metrics over time, plus an overall state indicator.
Model selector — Switch between connected AI models.
Export — Download conversation data and metrics for audit or analysis.

The Four Metrics

PsiGuard evaluates every AI response across four cognitive dimensions. Each metric ranges from 0.0 to 1.0.

Coherence

Is the response logically consistent? High coherence means the output holds together well. A drop may indicate the AI is generating content it's less certain about.

Drift

Is the AI staying on track? Low drift means the response aligns with what was asked. Rising drift can signal the AI is wandering from your question or introducing tangential content.

Entropy

How uncertain is the AI? Low entropy means focused, confident output. High entropy suggests the AI is hedging or spreading probability across many possibilities — often a precursor to hallucination.
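
To build intuition for what an entropy score captures, here is a minimal sketch of normalized Shannon entropy over a token probability distribution. This is an illustration of the general concept, not PsiGuard's actual (proprietary) computation:

```python
import math

def normalized_entropy(probs):
    """Shannon entropy of a token distribution, scaled to [0, 1]."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    max_h = math.log2(len(probs))  # uniform distribution has maximal entropy
    return h / max_h if max_h > 0 else 0.0

confident = [0.9, 0.05, 0.03, 0.02]   # mass concentrated on one token
hedging   = [0.25, 0.25, 0.25, 0.25]  # mass spread evenly across tokens

print(normalized_entropy(confident))  # low: focused, confident output
print(normalized_entropy(hedging))    # 1.0: maximal uncertainty
```

A confident model concentrates probability on few tokens (score near 0); a hedging model spreads it evenly (score near 1).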

Stability

Is the AI maintaining consistent behavior across the conversation? High stability means it's grounded. Dropping stability can indicate the AI's cognitive state is shifting mid-response.

Tip: Click the ? button next to "Cognitive Monitor" on the dashboard for a quick reference of what each metric means.

State Classifications

PsiGuard combines the four metrics into an overall state assessment for each response:

Stable (✅ Green) — The AI is behaving normally. Output is reliable.
Watch (⚠️ Yellow) — Something shifted. PsiGuard is paying closer attention. Output may still be usable but should be reviewed.
Warning (🔴 Red) — PsiGuard has detected a significant cognitive anomaly. Output should be treated with caution or blocked.

The Risk percentage reflects PsiGuard's overall confidence that the response is unreliable. Lower is better.
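
The mapping from metrics to a state label can be sketched as follows. The weighting and thresholds here are illustrative assumptions, not PsiGuard's actual formula:

```python
def classify_state(coherence, drift, entropy, stability):
    """Map the four metrics to a state label plus a risk score.
    Thresholds are illustrative, not PsiGuard's actual values."""
    # Treat drift and entropy as risk; coherence and stability as health.
    risk = (drift + entropy + (1 - coherence) + (1 - stability)) / 4
    if risk < 0.3:
        return "Stable", risk
    if risk < 0.6:
        return "Watch", risk
    return "Warning", risk

state, risk = classify_state(coherence=0.9, drift=0.1, entropy=0.2, stability=0.85)
print(state, f"risk={risk:.0%}")  # healthy metrics yield a Stable state
```

The pattern is the key point: healthy metrics produce a low risk score and a Stable state; degraded metrics push the score toward Warning.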

Baseline Warm-Up

When you start a new conversation, PsiGuard needs a few messages to learn the AI's normal behavior for this specific session. During warm-up, you'll see a "Building Baseline" indicator with a countdown. Metrics during this period are preliminary.

After the baseline is established (typically 3-5 messages), monitoring becomes fully calibrated for that conversation's context and model.
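
Conceptually, warm-up works like any per-session baseline: collect the first few readings, then score later readings by how far they deviate. This sketch uses a simple z-score; the window size and deviation logic are assumptions for illustration:

```python
from statistics import mean, stdev

class Baseline:
    """Learns a per-session 'normal' for one metric during warm-up."""

    def __init__(self, warmup=4):
        self.warmup = warmup
        self.samples = []

    def observe(self, value):
        self.samples.append(value)

    @property
    def ready(self):
        return len(self.samples) >= self.warmup

    def deviation(self, value):
        """Z-score of a new reading against the warm-up baseline."""
        if not self.ready:
            return None  # metrics are preliminary during warm-up
        mu, sigma = mean(self.samples), stdev(self.samples)
        return 0.0 if sigma == 0 else (value - mu) / sigma

b = Baseline()
for coherence in (0.88, 0.90, 0.87, 0.91):  # warm-up messages
    b.observe(coherence)
print(b.ready)               # baseline established
print(b.deviation(0.60))     # large negative z-score: anomalous drop
```

A coherence reading of 0.60 after a baseline near 0.89 produces a large deviation, which is exactly the kind of shift the warm-up period makes detectable.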

Supported Models

PsiGuard works with any LLM that produces text output. Currently supported through the Console:

OpenAI — GPT-4o, GPT-4, GPT-3.5 Turbo
Anthropic — Claude Sonnet, Claude Haiku
Google — Gemini Pro, Gemini Flash
Local models — Any model accessible via API endpoint

PsiGuard is model-agnostic. It monitors the output, not the internals. If your model produces text, PsiGuard can monitor it.

Protection Mode

The "Protection Active" toggle at the bottom of the chat enables active intervention. When enabled, PsiGuard doesn't just monitor — it can modify or block responses that exceed risk thresholds.

Protection OFF: Monitor-only mode. PsiGuard scores every response and updates the dashboard, but does not interfere with the AI's output. Useful for testing and observation.
Protection ON: Active mode. PsiGuard may append safety notes, flag problematic content, or substitute a safer response when the cognitive state crosses into Warning territory.
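
The decision logic described above can be sketched in a few lines. The specific interventions (appended note, substituted message) are illustrative assumptions about behavior the docs describe, not PsiGuard's exact output:

```python
def apply_protection(response, state, protection_on):
    """Decide what reaches the user based on state and the Protection toggle."""
    if not protection_on or state == "Stable":
        return response  # monitor-only mode, or nothing to flag: pass through
    if state == "Watch":
        # Flag the content but let it through for review.
        return response + "\n[PsiGuard: review recommended]"
    # Warning: substitute a safer response instead of the risky one.
    return "[PsiGuard blocked this response: cognitive anomaly detected]"

print(apply_protection("The answer is 42.", "Warning", protection_on=False))
print(apply_protection("The answer is 42.", "Warning", protection_on=True))
```

With protection off, even a Warning-state response passes through untouched (monitor-only); with protection on, the same response is replaced.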

API Reference

PsiGuard provides a RESTful API for programmatic integration. All endpoints require authentication via Firebase token.

Authentication

Include your Firebase ID token in the Authorization header:

Authorization: Bearer <your-firebase-id-token>

Create a Thread

POST /api/threads

Body:
{ "model": "openai-gpt4o" }

Response:
{ "thread_id": "abc123...", "title": "New Conversation" }

Send a Message (Streaming)

POST /api/threads/{thread_id}/stream

Body:
{ "message": "Your prompt here", "ood_enabled": true }

Returns: Server-Sent Events stream with text chunks and metric updates
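
A client consuming this endpoint needs to parse the SSE stream. This sketch parses `data:` lines from a raw event stream; the payload field names (`text`, `metrics`) are assumptions about the event shape, so check them against your actual stream:

```python
import json

def parse_sse(raw: str):
    """Extract JSON payloads from a Server-Sent Events stream.
    Events are separated by blank lines; payloads follow 'data: '."""
    events = []
    for block in raw.strip().split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data: "):
                events.append(json.loads(line[len("data: "):]))
    return events

# Hypothetical stream mixing text chunks and a metric update:
stream = (
    'data: {"text": "The capital of France"}\n\n'
    'data: {"text": " is Paris."}\n\n'
    'data: {"metrics": {"coherence": 0.93, "entropy": 0.12}}\n\n'
)
for event in parse_sse(stream):
    print(event)
```

In a real integration you would read the response body incrementally and dispatch text chunks to the UI and metric events to the monitor as they arrive.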

Get Conversation History

GET /api/threads/{thread_id}/messages

Returns an array of messages with timestamps and metric snapshots.

List Threads

GET /api/threads

Returns the list of the user's conversations, sorted by most recent.
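
Putting authentication and an endpoint together, here is a minimal request-builder sketch using only the standard library. The base URL is an assumption (the docs give only relative paths), and the token must be a real Firebase ID token before sending:

```python
import json
from urllib import request

BASE = "https://psiguard.net/api"  # assumed base URL; use your deployment's host

def api_request(token, method, path, body=None):
    """Build an authenticated request for a PsiGuard API endpoint."""
    data = json.dumps(body).encode() if body is not None else None
    return request.Request(
        BASE + path,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",  # Firebase ID token
            "Content-Type": "application/json",
        },
        method=method,
    )

req = api_request("demo-token", "POST", "/threads", {"model": "openai-gpt4o"})
print(req.get_method(), req.full_url)
# To send: urllib.request.urlopen(req) — requires a valid Firebase ID token.
```

The same helper covers the other endpoints, e.g. `api_request(token, "GET", "/threads")` to list conversations.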

Kairo AI Assistant

Kairo is PsiGuard's consumer AI assistant — a chatbot protected by the full PsiGuard monitoring engine. It's available as a web app and a WordPress plugin that web designers and agencies can embed on client sites.

Every response Kairo generates is monitored in real-time by PsiGuard. If the underlying AI drifts or hallucinates, Kairo's built-in protection catches it automatically — no manual oversight needed.

Ecosystem

PsiGuard is a platform, not just a dashboard. Any application built on PsiGuard inherits real-time cognitive monitoring as a core feature. Current ecosystem products include Kairo (AI assistant), MarketPulse (financial AI), and TinySpark (social AI agent).

Questions, Answered

Does PsiGuard slow down my AI?

No. PsiGuard's analysis runs in parallel with the AI's response stream. You'll see metrics updating in real-time as the response is being generated, with no perceptible delay to the user.

What data does PsiGuard store?

PsiGuard stores conversation history and metric snapshots in your authenticated account via Firebase. Your data is never shared with third parties or used to train any models. See our Privacy Policy for details.

Can PsiGuard prevent all hallucinations?

No monitoring system can prevent 100% of AI errors. PsiGuard significantly reduces the risk by detecting cognitive anomalies in real-time and intervening before unreliable output reaches your users. Think of it as a seatbelt — it dramatically improves safety, but responsible use of AI is still important.

How does PsiGuard detect hallucinations?

PsiGuard uses a proprietary cognitive monitoring framework to evaluate AI responses across multiple dimensions in real-time. The specific methods are part of our intellectual property and are not publicly disclosed.

Does it work with fine-tuned or custom models?

Yes. PsiGuard monitors the output behavior of any text-generating model, regardless of how it was trained. No access to model weights or internals is needed.

Usage Limits

Free accounts receive a monthly allowance of monitored API calls. When you reach the limit, you'll see an upgrade prompt in the Console. Pro accounts have unlimited monitoring.

You can check your current usage anytime at Account → Usage & Limits.