Souma

Souma RtU 1.0.0

A right-to-use agent that evaluates AI output for compliance, source evidence, provenance, and policy risk before work reaches customers.

Announcements

Availability and API access

RtU can run inline for fast pre-ship control, stream long-running evidence work, and preserve a review record for audit workflows.

Developers can submit generated output, source URLs, usage purpose, and workflow metadata to Souma, then retrieve the verdict, confidence, reasoning, policy source URLs, timestamps, and review state from the result endpoint.

Production pricing is workflow-based and depends on validation volume, evidence retention, review seats, streaming requirements, and deployment needs.

Agent ID
souma-rtu-1.0.0
Release
Apr 19, 2026
Input evidence
Output, URLs, purpose, policy context
Decision states
PERMITTED, NOT_PERMITTED, UNCLEAR
Evidence packet
Sources, licenses, reasoning, policy references
Runtime path
Inline API, stream, dashboard review
Start with POST /validate and use souma-rtu-1.0.0

API agent

souma-rtu-1.0.0
Use the stable RtU agent identifier when you need repeatable right-to-use decisions for production output review.

Submit checks

POST /validate
Send generated output, source URLs, usage purpose, and workflow metadata for right-to-use validation.
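As a minimal sketch, a validation request could be assembled as below. The base URL and every field name (`agent`, `output`, `source_urls`, `purpose`, `metadata`) are assumptions for illustration, not the confirmed Souma schema.

```python
import json

# Hypothetical base URL; the real Souma endpoint will differ.
SOUMA_API = "https://api.souma.example/v1"

def build_validate_request(output: str, source_urls: list[str],
                           purpose: str, metadata: dict) -> dict:
    """Assemble a POST /validate payload (illustrative field names)."""
    return {
        "agent": "souma-rtu-1.0.0",   # pin the stable agent identifier
        "output": output,              # generated content under review
        "source_urls": source_urls,    # evidence the output may derive from
        "purpose": purpose,            # intended usage context
        "metadata": metadata,          # workflow metadata for the audit trail
    }

payload = build_validate_request(
    output="Draft product copy...",
    source_urls=["https://example.com/license-terms"],
    purpose="marketing",
    metadata={"workflow": "release-review"},
)
body = json.dumps(payload)
# The request itself would go through any HTTP client, e.g.:
# requests.post(f"{SOUMA_API}/validate", data=body, headers=...)
```

Pinning the agent identifier in the payload keeps decisions repeatable across agent releases.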

Long-running evidence

GET /validate/stream
Stream validation updates for massive URL sets and evidence chains without blocking the UI.
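A consumer for the streamed updates might look like the sketch below. It assumes one JSON event per line and the `state`, `urls_checked`, and `verdict` fields; the actual wire format of GET /validate/stream may differ.

```python
import json
from typing import Iterable, Iterator

def iter_validation_events(lines: Iterable[str]) -> Iterator[dict]:
    """Parse streamed validation updates, assuming newline-delimited JSON."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Simulated stream for a large URL set: progress events, then a final state.
sample_stream = [
    '{"id": "val_123", "state": "RUNNING", "urls_checked": 4000}',
    '{"id": "val_123", "state": "RUNNING", "urls_checked": 9000}',
    '{"id": "val_123", "state": "COMPLETE", "verdict": "UNCLEAR"}',
]
events = list(iter_validation_events(sample_stream))
final = [e for e in events if e["state"] == "COMPLETE"]
```

Because progress arrives incrementally, a UI can render checked-URL counts as they land instead of blocking until the evidence chain completes.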

Decision record

GET /validate/result/:id
Retrieve verdict, confidence, reasoning, policy source URLs, timestamps, and review state.
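Once retrieved, a decision record can be flattened into a single audit line. The field names in this sketch are illustrative; consult the live schema for the actual response shape.

```python
def summarize_decision(record: dict) -> str:
    """Flatten a decision record into one audit-log line
    (illustrative field names, not the confirmed schema)."""
    return (f'{record["id"]} {record["verdict"]} '
            f'(confidence={record["confidence"]}, review={record["review_state"]}) '
            f'at {record["timestamp"]}: {record["reasoning"]}')

record = {
    "id": "val_123",
    "verdict": "NOT_PERMITTED",
    "confidence": 0.92,
    "reasoning": "Source license forbids commercial reuse.",
    "policy_sources": ["https://example.com/policy"],
    "review_state": "CLOSED",
    "timestamp": "2026-04-19T12:00:00Z",
}
line = summarize_decision(record)
```

Keeping verdict, confidence, review state, and timestamp in one line makes the record greppable in downstream audit tooling.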

Use cases

RtU is designed for the moment after a model generates content and before that content becomes a customer-facing artifact. It works best when right-to-use confidence matters more than raw speed.

Right-to-use gating

Approve, hold, or escalate generated content before release. RtU is strongest when product, marketing, support, or research output is about to become customer-facing.

License and source review

Map source URLs, asset references, usage constraints, attribution requirements, expiration windows, and policy exceptions into a single review trail.

Enterprise release review

Route uncertain output to legal, risk, compliance, or product owners with the evidence they need to make a fast decision.

Provider-neutral governance

Apply the same output controls across OpenAI, Anthropic, Google, Mistral, open-source, and custom deployments without rebuilding review logic.

Agent signals

Similarity

Restricted overlap, source-adjacent language, and potentially derivative output.

License posture

Commercial rights, attribution, usage scope, revocation, and time-bound conditions.

Provenance

Prompt, model, source URLs, reviewer action, decision time, and chain of custody.

Policy fit

Organization-specific approval rules, escalation owners, and prohibited use cases.

Benchmarks

RtU is evaluated around the operating qualities regulated teams need: evidence quality, conservative decisions, stable review states, and reliable long-running checks.

Evaluation | RtU 1.0.0 | What is measured
Evidence completeness | Covered | Source, license, policy, purpose
False-clearance resistance | Strict | Escalates restricted or missing rights
Fallback discipline | Required | No placeholder confidence or silent approval
Batch reliability | Streaming | Async evidence path for massive URL sets
Audit packet | Versioned | Decision tied to model and evidence window

Public benchmark numbers are held until statistical coverage, customer validation, and external review are complete.

Trust and safety

Testing and evaluation focus on making uncertainty visible. RtU is not a decorative score; it is a control point for teams that need to know why an output was approved, flagged, or held.

Intended use

Pre-ship review for AI-generated output in regulated product, legal, compliance, research, marketing, and support workflows.

Not a legal opinion

RtU organizes evidence and routes decisions. It does not replace counsel, reviewer judgment, or customer-specific policy ownership.

Known limitations

RtU can return UNCLEAR when source terms are missing, blocked, conflicting, newly changed, or outside the available evidence window.

Safety behavior

Ambiguous evidence is surfaced as uncertainty and can be routed to human review instead of being silently treated as approved.

No silent fallbacks

Missing evidence stays visible. RtU returns explicit uncertainty instead of manufacturing placeholder confidence.

Versioned decisions

Each decision is tied to the RtU agent version and the evidence available at the time of review.

Human escalation

High-risk or ambiguous output can move into review with the context a human owner needs.
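The safety behavior above can be sketched as a release gate: missing or ambiguous evidence is surfaced explicitly and routed to a human, never mapped to a default approval. Field names here are assumptions for illustration.

```python
from typing import Optional

def gate_output(record: Optional[dict]) -> dict:
    """Release gate sketch: no silent fallbacks, no placeholder confidence.
    Field names are illustrative, not the confirmed Souma schema."""
    if record is None or "verdict" not in record:
        # No decision record: keep the gap visible and route to a human.
        return {"action": "escalate", "reason": "missing decision record"}
    if record["verdict"] == "NOT_PERMITTED":
        return {"action": "block", "reason": "rights not established"}
    if record["verdict"] == "PERMITTED" and record.get("confidence") is not None:
        return {"action": "release", "reason": "permitted with stated confidence"}
    # UNCLEAR, or PERMITTED without a real confidence value: escalate
    # instead of inventing placeholder confidence.
    return {"action": "escalate", "reason": "uncertain evidence"}
```

The key design choice is that the fall-through branch escalates; approval is only ever an explicit, evidence-backed outcome.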

Hear from regulated teams

Public customer references are not yet published. Until they are, this section describes the workflows currently being validated with product, legal, and compliance teams.

01

AI product teams

Keep compliance in the same path as generation, release review, and customer-facing delivery.

02

Legal and risk teams

Review the evidence behind an output without reconstructing prompts, sources, and policies after the fact.

03

Compliance teams

Export audit packets that connect decisions to policy, source evidence, model version, and reviewer action.

Frequently asked questions

When should I use RtU Agent 1.0.0?

Use RtU when generated output may reach customers, clients, regulators, partners, or public channels and your team needs a defensible right-to-use decision before release.

How does RtU handle massive URL sets?

RtU separates fast inline gating from long-running evidence retrieval. Teams can stream validation updates, avoid blocking the UI, and inspect completed evidence later.

How much does RtU cost?

Pricing is workflow-based and depends on validation volume, evidence retention, review seats, streaming requirements, and deployment needs. Contact Souma for production access.

Which model providers does RtU support?

RtU is provider-neutral. It evaluates generated output and evidence after the model call, so it can sit behind OpenAI, Anthropic, Google, Mistral, open-source, and custom models.

Bring RtU into your AI workflow

Try Souma
Read announcement