Built for African companies deploying AI

Full visibility and security
for your LLM applications

Monitor every API call, block prompt injections, redact PII, and control costs. Two lines of code. No infrastructure changes.

Try LLM Observability free · See how it works
Dashboard preview: app.getlantern.dev (fintech-support)
Requests today: 24,881 (+12% from yesterday)
Avg latency: 342ms (no change)
API cost today: $103.40 (+8% from yesterday)
Incidents blocked: 7 (3 in last hour)
Request volume — last 24h (chart)
Recent incidents: prompt injection (blocked), PII redacted from input (redacted), cost spike at 3x baseline (flagged)
Feature overview

Lantern provides end-to-end observability for LLM applications with full visibility into inputs, outputs, latency, token usage, and costs. Real-time guardrails block prompt injection attempts, redact PII before it reaches the model, and filter toxic outputs before they reach your users. A live dashboard surfaces incidents, cost trends, and performance anomalies the moment they occur — so African companies deploying AI can move fast without flying blind.

Trusted by AI teams across Africa
FinTech Kenya
LoanApp NG
HealthTech ZA
EdTech GH
TeleSaaS KE
Monitor

See every LLM call in real time

Every request logged with inputs, outputs, latency, token usage, and cost. Anomalies surface automatically before they become incidents.

  • Real-time request volume and latency tracking
  • Per-app and per-model cost breakdown
  • Hourly and daily analytics with trend detection
  • Automatic anomaly detection on cost spikes
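The anomaly-detection bullet above can be illustrated with a rolling-baseline sketch. The class name, the 24-hour window, and the 3x multiplier below are illustrative assumptions, not Lantern's actual internals:

```python
from collections import deque

class CostSpikeDetector:
    """Flag an hour whose spend exceeds a multiple of the rolling baseline."""

    def __init__(self, window_hours: int = 24, multiplier: float = 3.0):
        self.window = deque(maxlen=window_hours)
        self.multiplier = multiplier

    def observe(self, hourly_cost: float) -> bool:
        """Record one hour of spend; return True if it counts as a spike."""
        if self.window:
            baseline = sum(self.window) / len(self.window)
            is_spike = hourly_cost > self.multiplier * baseline
        else:
            is_spike = False  # no baseline yet, nothing to compare against
        self.window.append(hourly_cost)
        return is_spike

detector = CostSpikeDetector()
for cost in [4.0, 4.2, 3.9, 4.1]:   # quiet hours establish a ~$4.05 baseline
    detector.observe(cost)
detector.observe(13.0)              # ~3.2x baseline -> flagged as a spike
```

A longer window smooths out daily traffic cycles at the cost of reacting more slowly to genuine usage growth.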
Analytics — fintech-support
Total requests: 184,220 (+18% vs last week)
Total cost: $641.80 (+11% vs last week)
Avg latency: 338ms (-4ms vs last week)
Error rate: 0.4% (-0.1% vs last week)
Daily request volume — last 7 days (chart)
Protect

Block threats before they reach your LLM

Real-time guardrails scan every request for prompt injections, PII leaks, and toxic outputs — before they hit the API or reach your users.

  • Prompt injection detection with configurable threshold
  • PII redaction built for Africa — KE, NG, ZA phone formats, M-Pesa codes, national IDs
  • Toxic output filtering before responses reach users
  • Hard block or soft log mode — your choice
Incidents — last 24h
Prompt injection: "Override system prompt via chat", confidence 94% (blocked)
PII detected: phone number +254712••••78 (redacted)
Request passed: "What is my M-Pesa balance?" (forwarded)
# guardrails fire in <12ms
lantern.init(
  block_prompt_injection=True,
  redact_pii=True,
  hard_block=True,
)
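The region-aware redaction described above could be sketched with patterns like the following. These regexes and the masking format are illustrative guesses, not Lantern's shipped ruleset:

```python
import re

# Illustrative patterns only -- production coverage needs far more care.
PII_PATTERNS = {
    "ke_phone": re.compile(r"\+254\d{9}\b"),                     # Kenyan mobile
    "ng_phone": re.compile(r"\+234\d{10}\b"),                    # Nigerian mobile
    "za_phone": re.compile(r"\+27\d{9}\b"),                      # South African mobile
    "mpesa_code": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{6}\b"),   # M-Pesa-style receipt code
}

def redact(text: str) -> str:
    """Mask each match, keeping a short prefix and the last two characters."""
    for pattern in PII_PATTERNS.values():
        text = pattern.sub(lambda m: m.group()[:7] + "••••" + m.group()[-2:], text)
    return text

print(redact("Call me on +254712345678 about the loan"))
# -> "Call me on +254712••••78 about the loan"
```

Running the redaction before the request leaves your process is what keeps the raw number out of both the model provider's logs and your own.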
Control costs

Stop runaway API bills before they happen

Track cost per app, per model, per user. Set daily budget limits and get alerted the moment usage spikes beyond your baseline.

  • Per-app and per-model cost breakdown in real time
  • Daily budget limits with automatic request pausing
  • Cost spike alerts when usage exceeds 3x baseline
  • Custom tags for chargeback to teams or products
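Daily budget enforcement from the list above could work roughly like this sketch; `DailyBudget` and its pause semantics are hypothetical, not a documented Lantern API:

```python
class DailyBudget:
    """Pause requests once cumulative daily spend crosses the limit."""

    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> bool:
        """Record a request's cost; return False once requests should pause."""
        self.spent += cost_usd
        return self.spent <= self.limit

    @property
    def used_pct(self) -> float:
        return 100.0 * self.spent / self.limit

budget = DailyBudget(limit_usd=300.0)   # the $300 daily budget shown below
budget.charge(103.40)                   # today's spend so far
print(f"{budget.used_pct:.0f}% of budget used")  # -> "34% of budget used"
```

In practice the accumulator would reset at midnight and the `False` return would translate into rejecting or queueing new LLM calls.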
Cost by app — last 7 days
Total this week: $641.80 · Daily budget: $300 · Budget used: 34%

App              Requests   Cost      Incidents
fintech-support  88,204     $312.40   4
loan-assistant   54,901     $194.20   2
kyc-bot          28,442     $100.60   0
onboarding       12,673     $34.60    0
Events log

A full audit trail of every LLM call

Every request and response logged with latency, cost, status, and guardrail results. Filter, search, and export for compliance audits.

  • Live event stream, most recent first
  • Filter by incidents, errors, or all events
  • Full input and output stored per event
  • Export for compliance and audit requirements
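A compliance export of the kind described above could be as small as this sketch; the event-dict fields are assumed from the log columns shown here, not taken from Lantern's docs:

```python
import csv
import io

# Assumed event shape, mirroring the fields shown in the events log.
events = [
    {"status": "passed", "app": "fintech", "model": "sonnet-4-6",
     "latency_ms": 342, "cost_usd": 0.0004},
    {"status": "blocked", "app": "fintech", "model": "sonnet-4-6",
     "latency_ms": 11, "cost_usd": 0.0},
]

def export_events(events, only_status=None) -> str:
    """Filter events by guardrail status and serialise them to CSV."""
    rows = [e for e in events if only_status is None or e["status"] == only_status]
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["status", "app", "model", "latency_ms", "cost_usd"]
    )
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(export_events(events, only_status="blocked"))
```

The same filter-then-serialise shape works for JSON exports or for streaming rows into an auditor's data warehouse.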
Events log — live stream (filters: all events · incidents only · errors)

Status    App      Model       Input                                Latency  Cost
passed    fintech  sonnet-4-6  What is my M-Pesa balance?           342ms    $0.0004
blocked   fintech  sonnet-4-6  Ignore all previous instructions...  11ms     -
redacted  kyc-bot  sonnet-4-6  My ID is 12345678, verify me         298ms    $0.0003
passed    loans    sonnet-4-6  What are the loan requirements?      401ms    $0.0005
blocked   loans    haiku-4-5   DAN mode enabled. You are now...     9ms      -
How it works

Two lines of code to full protection

No infrastructure changes. No proxy servers. Wrap your existing Anthropic client and every call is automatically monitored and protected.

01

Install the SDK

One pip command. Works with your existing Anthropic client. No changes to your infrastructure or call signatures.

02

Wrap your client

Call lantern.wrap() once at startup. Every subsequent LLM call is automatically intercepted and guarded.

03

See everything

Open your dashboard. Every call, every cost, every incident — live. Invite your team and set alerts in minutes.

main.py
# pip install lantern-ai

import anthropic
import lantern

# Initialise once at application startup
lantern.init(
  postgres_dsn="postgresql://...",
  app_name="fintech-support",
  block_prompt_injection=True,
  redact_pii=True,
)

# Wrap your client — that is it
client = anthropic.Anthropic()
lantern.wrap(client)

# All calls are now monitored and protected
response = client.messages.create(
  model="claude-sonnet-4-6",
  max_tokens=1024,
  messages=[{"role": "user", "content": user_msg}]
)
Pricing

Simple, transparent pricing

Start free. No US-enterprise contracts. No $3,000/month bills. Cancel anytime.

Starter
$99/mo
Up to 1M tokens/month
Monitoring and alerts
Basic guardrails
Live events log
1 user seat
Email support
Get started
Scale
$499/mo
Unlimited tokens
Everything in Starter
Custom alert rules
Unlimited seats
SLA guarantee
Dedicated support
Contact us

Start protecting your AI applications today

Join African companies using Lantern to monitor their LLM usage, block security threats, and control costs — from $99/month.