Open source · OpenTelemetry native · 10 minutes to first issue

Automatic issue detection for AI agents

Trace your agent in production. Latitude finds the failures and writes the evals.

Agents fail differently

They don't only crash. They hallucinate, lose context, call the wrong tool, and confidently return the wrong answer. You need more than logs to catch that.

Search through thousands of traces

Find the exact step where your agent went wrong. Filter by error type, model, user, time range.

Automatic failure clustering

Latitude groups similar failures into issues without you configuring anything. No rules, no regex.

Evals generated from real failures

Every discovered issue becomes a running eval. New traffic is tested against known failure modes automatically.
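As a rough, generic illustration of that idea (not Latitude's eval engine or API), an eval for one discovered failure mode can be expressed as an LLM-as-judge check run over fresh traces. The model name, judge prompt, and example trace below are assumptions made purely for the sketch:

```typescript
// Generic LLM-as-judge sketch: does a new trace exhibit a known failure mode?
// NOT Latitude's actual implementation; model and prompt are assumptions.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// One eval per issue: a failure-mode description plus a judge prompt
// that checks whether new traffic shows the same pattern.
async function traceExhibitsIssue(failureMode: string, traceOutput: string): Promise<boolean> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "You judge whether an AI agent's output exhibits a known failure mode. " +
          "Reply with exactly PASS (no failure) or FAIL (failure present).",
      },
      { role: "user", content: `Failure mode: ${failureMode}\n\nAgent output:\n${traceOutput}` },
    ],
  });
  return response.choices[0]?.message.content?.trim() === "FAIL";
}

// Example: test a new trace against a "hallucinating policy details" issue.
traceExhibitsIssue(
  "The agent invents refund policy details that are not in the knowledge base.",
  "Sure, any item can be returned within 365 days for a full refund.",
).then((failed) => console.log(failed ? "Issue reproduced on new traffic" : "Passed"));
```

In the product itself, this kind of check is aligned to your reviewers' validated feedback and scheduled against incoming traffic continuously, as described further down the page.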

The anatomy of an issue

A clear path to reliable AI

Monitor recurring failure modes effectively with issues


Automatic detection

Latitude creates potential issues from your traces automatically. Review and validate them.

Alerts

Get notified on Slack, by email, or via webhooks when a new issue is detected or an existing one escalates.

Project issues

| Issue | Status | Traces |
| --- | --- | --- |
| Hallucinating policy details | Pending review | 47 |
| NSFW Speech | Monitoring (regressing) | 561 |
| Memory loss | Escalating | 1642 |
| User frustration | Escalating | 1101 |
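The Alerts callout above mentions webhook delivery. As a minimal sketch only, here is what a receiver for those notifications could look like; the route and the payload fields are hypothetical and not Latitude's documented webhook schema:

```typescript
// Minimal sketch of a webhook receiver for issue alerts.
// The /latitude-alerts route and the payload fields (issue, status,
// traceCount) are hypothetical, not a documented schema.
import { createServer } from "node:http";

interface IssueAlert {
  issue: string;      // e.g. "Memory loss" (hypothetical field)
  status: string;     // e.g. "Escalating" (hypothetical field)
  traceCount: number; // affected traces (hypothetical field)
}

const server = createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/latitude-alerts") {
    res.statusCode = 404;
    res.end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => { body += chunk; });
  req.on("end", () => {
    const alert = JSON.parse(body) as IssueAlert;
    // Page the on-call channel for escalating issues, just log the rest.
    if (alert.status === "Escalating") {
      console.log(`PAGE: ${alert.issue} is escalating (${alert.traceCount} traces)`);
    } else {
      console.log(`Issue update: ${alert.issue} -> ${alert.status}`);
    }
    res.statusCode = 200;
    res.end("ok");
  });
});

server.listen(3000, () => console.log("Listening for alert webhooks on :3000"));
```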

Monitor with evals

Create an eval from any issue. Latitude aligns it to your validated feedback and runs it against new traffic continuously.

Golden datasets

Latitude automatically builds a golden dataset for each issue from your validated traces.

Human signal

Latitude clusters your team's feedback into failure modes so nothing gets lost.

Get started in minutes

Set up Latitude in your project and discover your first issues in as little as 10 minutes.

```bash
$ npx -y @latitude-data/claude-code-telemetry install
```
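The one-line installer above targets Claude Code telemetry. Since Latitude is OpenTelemetry native, agents in other stacks can send traces with a standard OTLP exporter; in the sketch below the endpoint URL, auth header, and LATITUDE_API_KEY variable are placeholders rather than documented values:

```typescript
// Minimal sketch of manual OpenTelemetry tracing for an agent step,
// assuming a generic OTLP/HTTP collector. Endpoint and auth are placeholders.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { trace, SpanStatusCode } from "@opentelemetry/api";

const sdk = new NodeSDK({
  serviceName: "my-agent",
  traceExporter: new OTLPTraceExporter({
    url: "https://collector.example.com/v1/traces", // placeholder endpoint
    headers: { authorization: `Bearer ${process.env.LATITUDE_API_KEY ?? ""}` }, // placeholder auth
  }),
});
sdk.start();

const tracer = trace.getTracer("my-agent");

// Stand-in for whatever your agent actually does in one step.
async function runTool(name: string, args: Record<string, unknown>): Promise<string> {
  return `ran ${name} with ${JSON.stringify(args)}`;
}

// Wrap each agent step in a span so failures show up as searchable traces.
async function callTool(name: string, args: Record<string, unknown>): Promise<string> {
  return tracer.startActiveSpan(`tool.${name}`, async (span) => {
    try {
      span.setAttribute("tool.args", JSON.stringify(args));
      return await runTool(name, args);
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}

callTool("search_policies", { query: "refund window" }).then(console.log);
```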

FAQ

Answers to the most popular questions

- What's the difference between LLM observability and regular logging?
- Do I need observability if my LLM app is already working fine?
- How is Latitude different from Langfuse or other observability tools?
- How quickly can I see my first production traces?
- What issues will observability actually help me catch?
- We're already using OpenAI's dashboard. Why do I need more?
- Once I can see issues, then what?


Start finding failures today

Find out why your AI agent is failing in minutes.


Project funded with the support of ACCIÓ - Generalitat de Catalunya

We're GDPR compliant.
