Pulse becomes the live operating surface for output review
Pulse now brings outputs processed, confidence posture, audit success, and processing latency into one continuous review feed for regulated AI teams.
One screen for live output posture
Teams reviewing AI outputs need to know whether the system is healthy before they drill into a specific artifact. Pulse now gives that opening answer immediately: how much work is moving through the system, how confident the current review set is, how often audits are clearing, and where processing time is rising.
The goal is not to create another analytics page. The goal is to make the first screen operational for compliance, product, and legal teams that need to understand the day before they start investigating edge cases.
Designed for review operations
We tightened the surface around the metrics that teams actually use in review workflows. Hover details now open faster, the visual treatment is aligned with the wider platform shell, and the page reads as part of the same operating environment as Evidence, Alerts, and the Checker.
- Live outputs processed as a direct throughput signal.
- Confidence posture exposed as a review-quality signal instead of an isolated model metric.
- Audit success and processing latency visible in the same frame so teams can judge quality and responsiveness together.
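The four signals above can be rolled up from a batch of review records. A minimal sketch, assuming a hypothetical record shape with `confidence`, `audit_passed`, and `latency_ms` fields (these names are illustrative, not Pulse's actual schema or API):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ReviewRecord:
    # Hypothetical fields; Pulse's real data model is not shown here.
    confidence: float    # model confidence for the reviewed output, 0..1
    audit_passed: bool   # whether the output cleared its audit checks
    latency_ms: float    # end-to-end processing time in milliseconds

def posture_summary(records: list[ReviewRecord]) -> dict:
    """Reduce a batch of review records to the four Pulse-style signals:
    throughput, confidence posture, audit success, and latency."""
    if not records:
        return {"outputs_processed": 0}
    n = len(records)
    latencies = sorted(r.latency_ms for r in records)
    return {
        "outputs_processed": n,                                   # throughput
        "mean_confidence": mean(r.confidence for r in records),   # review-quality signal
        "audit_success_rate": sum(r.audit_passed for r in records) / n,
        "p95_latency_ms": latencies[min(n - 1, int(0.95 * n))],   # responsiveness
    }
```

Reviewing these four numbers in one frame is what lets a team judge quality and responsiveness together rather than chasing each metric on a separate page.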
What this changes for teams
Before this release, the top-level dashboard was useful but too detached from the actual compliance workflow. Pulse now works as the opening control surface for teams that need to decide where to investigate, escalate, or intervene.
That matters most in regulated environments, where the question is rarely whether one output passed. The harder question is whether the system posture still supports safe shipping at the volume the team is handling today.
Build Souma into the same path where AI outputs are generated, reviewed, and shipped.