API agent
`souma-rtu-1.0.0`: Use the stable RtU agent identifier when you need repeatable right-to-use decisions for production output review.
Right-to-use agent for AI output compliance, source evidence, provenance, and policy risk before work reaches customers.
Introduced the evaluation harness for evidence completeness, false-clearance resistance, fallback discipline, and batch reliability.
Defined Souma as a provider-neutral control layer for output rights, provenance, and audit evidence across model providers.
RtU can run inline for fast pre-ship control, stream long-running evidence work, and preserve a review record for audit workflows.
Developers can submit generated output, source URLs, usage purpose, and workflow metadata to Souma, then retrieve the verdict, confidence, reasoning, policy source URLs, timestamps, and review state from the result endpoint.
`POST /validate` uses the stable agent identifier `souma-rtu-1.0.0` for repeatable right-to-use decisions in production output review.

- `POST /validate`: Send generated output, source URLs, usage purpose, and workflow metadata for right-to-use validation.
- `GET /validate/stream`: Stream validation updates for massive URL sets and evidence chains without blocking the UI.
- `GET /validate/result/:id`: Retrieve verdict, confidence, reasoning, policy source URLs, timestamps, and review state.
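A minimal sketch of the request shape for `POST /validate`, assuming a JSON API. The field names below are illustrative assumptions, not a confirmed Souma schema:

```python
import json

# Hypothetical request body for POST /validate. Field names are
# assumptions for illustration, not the confirmed Souma wire format.
def build_validate_request(output_text, source_urls, purpose, metadata):
    return {
        "agent": "souma-rtu-1.0.0",   # stable agent identifier
        "output": output_text,
        "source_urls": source_urls,
        "purpose": purpose,
        "metadata": metadata,
    }

request_body = build_validate_request(
    output_text="Draft support reply generated by a model.",
    source_urls=["https://example.com/terms"],
    purpose="customer-support",
    metadata={"workflow": "pre-ship-review"},
)
print(json.dumps(request_body, indent=2))
```

The same payload keys would then show up, enriched with verdict and review state, in the response from `GET /validate/result/:id`.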
RtU is designed for the moment after a model generates content and before that content becomes a customer-facing artifact. It works best when right-to-use confidence matters more than raw speed.
Approve, hold, or escalate generated content before release. RtU is strongest when product, marketing, support, or research output is about to become customer-facing.
Map source URLs, asset references, usage constraints, attribution requirements, expiration windows, and policy exceptions into a single review trail.
Route uncertain output to legal, risk, compliance, or product owners with the evidence they need to make a fast decision.
Apply the same output controls across OpenAI, Anthropic, Google, Mistral, open-source, and custom deployments without rebuilding review logic.
- Restricted overlap, source-adjacent language, and potentially derivative output.
- Commercial rights, attribution, usage scope, revocation, and time-bound conditions.
- Prompt, model, source URLs, reviewer action, decision time, and chain of custody.
- Organization-specific approval rules, escalation owners, and prohibited use cases.
RtU is evaluated around the operating qualities regulated teams need: evidence quality, conservative decisions, stable review states, and reliable long-running checks.
| Evaluation | RtU 1.0.0 | What is measured |
|---|---|---|
| Evidence completeness | Covered | Source, license, policy, purpose |
| False-clearance resistance | Strict | Escalates restricted or missing rights |
| Fallback discipline | Required | No placeholder confidence or silent approval |
| Batch reliability | Streaming | Async evidence path for massive URL sets |
| Audit packet | Versioned | Decision tied to model and evidence window |
Public benchmark numbers are held until statistical coverage, customer validation, and external review are complete.
Extensive testing and evaluation are focused on making uncertainty visible. RtU is not a decorative score. It is a control point for teams that need to know why an output was approved, flagged, or held.
Pre-ship review for AI-generated output in regulated product, legal, compliance, research, marketing, and support workflows.
RtU organizes evidence and routes decisions. It does not replace counsel, reviewer judgment, or customer-specific policy ownership.
RtU can return UNCLEAR when source terms are missing, blocked, conflicting, newly changed, or outside the available evidence window.
Ambiguous evidence is surfaced as uncertainty and can be routed to human review instead of being silently treated as approved.
Missing evidence stays visible. RtU returns explicit uncertainty instead of manufacturing placeholder confidence.
Each decision is tied to the RtU agent version and the evidence available at the time of review.
High-risk or ambiguous output can move into review with the context a human owner needs.
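The approve/hold/escalate behavior described above can be sketched as a small routing function over the result payload. The verdict strings other than UNCLEAR are assumptions for illustration:

```python
# Hypothetical routing over the result payload from GET /validate/result/:id.
# Verdict values other than UNCLEAR are assumed, not confirmed API strings.
def route(result):
    verdict = result.get("verdict")
    if verdict == "APPROVED":
        return "release"
    if verdict == "UNCLEAR":
        # Ambiguity is made visible and routed to a human owner,
        # never silently treated as approval.
        return "human-review"
    # Restricted rights, missing evidence, or unknown verdicts stay held.
    return "hold"

print(route({"verdict": "UNCLEAR", "confidence": 0.41}))  # human-review
```

Note the default branch: anything that is not an explicit approval stays held, matching the no-silent-approval discipline above.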
Public customer references are not yet published. Until then, Souma keeps this section explicit about the workflows being validated by product, legal, and compliance teams.
Keep compliance in the same path as generation, release review, and customer-facing delivery.
Review the evidence behind an output without reconstructing prompts, sources, and policies after the fact.
Export audit packets that connect decisions to policy, source evidence, model version, and reviewer action.
Use RtU when generated output may reach customers, clients, regulators, partners, or public channels and your team needs a defensible right-to-use decision before release.
RtU separates fast inline gating from long-running evidence retrieval. Teams can stream validation updates, avoid blocking the UI, and inspect completed evidence later.
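The non-blocking pattern can be sketched with a stand-in generator in place of the real `GET /validate/stream` transport; the event shape here is an assumption:

```python
# Stand-in for a streamed update sequence from GET /validate/stream.
# In production this would be an HTTP streaming response, and the
# event fields are illustrative assumptions.
def fake_stream():
    yield {"stage": "evidence", "urls_checked": 120, "done": False}
    yield {"stage": "evidence", "urls_checked": 480, "done": False}
    yield {"stage": "verdict", "result_id": "rtu-123", "done": True}

def consume(stream):
    # Process updates incrementally so the UI never blocks on the full
    # evidence pass; return the result id once the stream completes.
    for update in stream:
        if update["done"]:
            return update["result_id"]
    return None

print(consume(fake_stream()))  # rtu-123
```

The returned id is what a later call to `GET /validate/result/:id` would use to inspect the completed evidence.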
Pricing is workflow-based and depends on validation volume, evidence retention, review seats, streaming requirements, and deployment needs. Contact Souma for production access.
RtU is provider-neutral. It evaluates generated output and evidence after the model call, so it can sit behind OpenAI, Anthropic, Google, Mistral, open-source, and custom models.
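Because validation happens after the model call, one gate can sit behind any provider. A minimal sketch, where both callables are stand-ins for a real provider SDK and the Souma client:

```python
# Provider-neutral gate: generate with any model, then validate the
# output before it becomes customer-facing. Both callables are
# hypothetical stand-ins, not real SDK interfaces.
def gate(generate, validate, prompt):
    output = generate(prompt)
    decision = validate(output)
    if decision["verdict"] != "APPROVED":
        return {"released": False, "reason": decision["verdict"]}
    return {"released": True, "output": output}

fake_generate = lambda p: f"draft for: {p}"
fake_validate = lambda o: {"verdict": "UNCLEAR"}

print(gate(fake_generate, fake_validate, "welcome email"))
```

Swapping the `generate` callable is the only change needed to move between providers; the review logic stays untouched.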