Keywords AI is an LLM monitoring platform for AI startups, offering tools to route, trace, evaluate, and debug LLM requests with minimal code.
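For context on what "routing with minimal code" usually means for gateways of this kind, here is a hedged sketch of the OpenAI-compatible proxy pattern; the base URL and model name are placeholders, not confirmed Keywords AI values:

```python
# Hypothetical sketch of the OpenAI-compatible gateway pattern; the base_url
# and model name below are placeholders, not confirmed Keywords AI values.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",  # placeholder proxy endpoint
    api_key="YOUR_GATEWAY_API_KEY",             # gateway key, not a provider key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the gateway routes this to the configured provider
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```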
Laminar is an open-source platform for tracing, evaluating, and analyzing AI agents, helping developers build reliable AI applications.
AI engineering workbench with prompt management, evaluations, and LLM observability for team collaboration and improved prompt quality.
AI Delivery Engine for continuous evaluation, guardrails, and monitoring of ML, GenAI, and Agentic AI systems, ensuring reliable performance and scaling.
Data observability platform that helps data teams know when things break, what went wrong, and how to fix it, now part of Datadog.
Open Source LLM Engineering Platform for traces, evals, prompt management, and metrics to debug and improve LLM applications.
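As an illustration of the tracing workflow described above, here is a minimal sketch assuming the Langfuse Python SDK's v2-style `@observe()` decorator, with credentials supplied via `LANGFUSE_PUBLIC_KEY` / `LANGFUSE_SECRET_KEY` environment variables; function and input names are illustrative:

```python
# Minimal sketch assuming the Langfuse Python SDK's v2-style decorator API;
# LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are expected in the environment.
from langfuse.decorators import langfuse_context, observe

@observe()  # records this call (inputs, outputs, timing) as a trace
def summarize(text: str) -> str:
    # call your LLM of choice here; the decorator captures the result
    return text[:100]

summarize("Langfuse traces nested function calls with minimal code changes.")
langfuse_context.flush()  # make sure the trace is sent before the script exits
```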
New Relic is an all-in-one observability platform for engineers to monitor, debug, and improve their entire stack.
Collaborative AI development platform to build, test, and monitor AI features, enabling teams to ship AI to production 10x faster.
Grafana is an open and composable observability platform for data visualization, monitoring, and analysis across various data sources.
End-to-end GenAI evaluation and observability platform to ship AI applications with quality, speed, and reliability.
LLM observability and evaluation platform for AI applications, from development to production, offering unified observability and agent evaluation.
LangChain provides tools and frameworks for building, testing, and deploying AI agents, focusing on observability and durable performance.
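A minimal sketch of the kind of chain LangChain is built around, assuming the `langchain-openai` integration package and an `OPENAI_API_KEY` in the environment; the model name is illustrative:

```python
# Minimal sketch assuming the langchain-openai integration package;
# the model name is illustrative and an OPENAI_API_KEY is required.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("user", "{question}"),
])

# LCEL pipe syntax: the formatted prompt feeds the model
chain = prompt | llm
print(chain.invoke({"question": "What is LLM observability?"}).content)
```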
Application performance monitoring and error tracking software for developers and software teams.
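Tools in this category are typically wired in with a one-line SDK initialization; a hedged sketch using `sentry_sdk` as a stand-in (Sentry is the comparison point named later in this list), with a placeholder DSN:

```python
# Sketch of SDK-based error tracking, using sentry_sdk as a stand-in;
# the DSN is a placeholder, not a real project key.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.example.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # also capture performance traces
)

try:
    1 / 0
except ZeroDivisionError as exc:
    sentry_sdk.capture_exception(exc)  # reported with stack trace and context
```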
Experiment tracker purpose-built for foundation models, enabling monitoring, debugging, and visualization of model internals at scale.
AI Gateway & LLM Observability platform for routing, debugging, and analyzing AI applications, empowering developers to build reliable AI solutions.
Fiddler AI offers an AI observability platform for monitoring, analyzing, and protecting AI agents, LLMs, and ML models.
Confident AI is an LLM evaluation platform with best-in-class metrics and guardrails to test, benchmark, safeguard, and improve LLM application performance.
LangWatch is an AI agent testing, LLM evaluation, and LLM observability platform for building better AI agents with confidence.
Raindrop is a Sentry-like monitoring platform that alerts teams to hidden issues in their AI products.