# Getting Started
This guide is the fastest path for a technical evaluator who wants to understand Mandaitor in one sitting, create a first mandate, and verify a real AI action against it. If you are reviewing Mandaitor for a pilot, this page should give you enough context to move from first contact to a meaningful sandbox test without digging through the rest of the documentation first.
## What you will achieve
By the end of this guide, you will have completed the core evaluator loop.
| Step | Outcome |
|---|---|
| 1 | Understand the evaluation model and the minimum information you need |
| 2 | Install the SDK and initialize a client |
| 3 | Create your first delegation mandate |
| 4 | Verify a real agent action before execution |
| 5 | Understand which artifacts to review with your governance or product team |
## Evaluation model in one paragraph
Mandaitor is the runtime trust layer for delegated AI actions. Instead of assuming that an AI agent is allowed to act because it has application access, you verify whether a specific delegator granted a specific delegate the right to perform a specific action on a specific resource, under explicit constraints. The verification result becomes a signed, reviewable proof artifact you can preserve for audit, incident handling, or customer trust workflows.
## 1. Gather the minimum inputs
Before you start, make sure you know the answers to the following questions.
| Input | Example | Why it matters |
|---|---|---|
| Tenant context | `tnt_demo_healthcare` | Establishes the authority boundary for mandates and verification |
| Delegator | `user:jane.doe@example.com` | Represents the person or system granting authority |
| Delegate | `agent:validation-v3` | Identifies the AI agent or service acting under delegation |
| Action | `construction.validation.approve` | Defines what the delegate is allowed to do |
| Resource | `project:proj_12345/*` | Defines where the authority applies |
| Constraints | duration, purpose, rate limits, human review | Converts broad permission into governed runtime policy |
If you do not yet know your production identifiers, use representative placeholders. The purpose of the first evaluation is not perfect modeling; it is learning how mandates and verification behave in your workflow.
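Collected together, these inputs are just a small structured record. A minimal sketch of that record follows; the `EvaluationInputs` type is illustrative for planning your evaluation, not an official SDK type:

```typescript
// Illustrative only: mirrors the table above, not an official SDK type.
interface EvaluationInputs {
  tenantId: string;   // authority boundary, e.g. "tnt_demo_healthcare"
  delegator: string;  // who grants authority
  delegate: string;   // who acts under delegation
  action: string;     // what the delegate may do
  resource: string;   // where the authority applies
  constraints: Record<string, unknown>; // duration, rate limits, review flags
}

const inputs: EvaluationInputs = {
  tenantId: "tnt_demo_healthcare",
  delegator: "user:jane.doe@example.com",
  delegate: "agent:validation-v3",
  action: "construction.validation.approve",
  resource: "project:proj_12345/*",
  constraints: { duration: "P30D", requires_human_review: true },
};
```

Filling in this record first, even with placeholders, makes the mandate-creation step below mostly mechanical.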
## 2. Get your API key
You need an API key to authenticate your requests. You can request access via the Mandaitor trust site and use the issued credentials for sandbox or pilot evaluation.
Keep your API key secure and never expose it in a public frontend application.
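In practice that means reading the key from the server-side environment and failing fast when it is missing. A small sketch, assuming the `MANDAITOR_API_KEY` environment variable used later in this guide:

```typescript
// Fail fast if the key is missing instead of sending unauthenticated requests.
function requireApiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.MANDAITOR_API_KEY;
  if (!key) {
    throw new Error(
      "MANDAITOR_API_KEY is not set; export it before running the examples.",
    );
  }
  return key;
}
```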
## 3. Grab the evaluator kit
If you want the fastest self-serve path, keep the following three assets open in parallel while you test.
| Asset | What it gives you | Link |
|---|---|---|
| Interactive API Reference | Browse every endpoint, schema, and example response in the browser | /api-reference |
| OpenAPI specification | Import the full machine-readable contract into your own tooling | /downloads/mandaitor-openapi.yaml |
| Postman collection | Run the first evaluation flow without writing a full client first | /downloads/mandaitor-api.postman_collection.json |
This combination is the intended evaluator path for teams that want to validate the Mandaitor API before embedding the SDK into an application codebase.
## 4. Install the SDK

Install the TypeScript SDK in your project:

```shell
npm install @mandaitor/sdk
```

If you want to evaluate a pre-built action vocabulary for a specific industry, install the matching taxonomy package as well:

```shell
npm install @mandaitor/taxonomy-construction
```
## 5. Initialize the client

Create a `MandaitorClient` with your tenant context and credentials.

```typescript
import { MandaitorClient } from "@mandaitor/sdk";

const client = new MandaitorClient({
  apiKey: process.env.MANDAITOR_API_KEY,
  tenantId: "tnt_your_tenant_id",
});
```
At this point, your evaluator environment is ready to create and verify mandates.
## 6. Create your first mandate

The example below models a construction validation workflow, but the same structure applies to healthcare, finance, or any other governed AI use case.

```typescript
import type { CreateMandateRequest } from "@mandaitor/sdk";

async function createFirstMandate() {
  const request: CreateMandateRequest = {
    principal: {
      type: "NATURAL_PERSON",
      subject_id: "user:jane.doe@example.com",
      display_name: "Jane Doe",
    },
    delegate: {
      type: "AGENT",
      subject_id: "monco:agent:validation-v3",
      display_name: "Monco Validation Agent",
    },
    scope: {
      actions: ["construction.validation.approve"],
      resources: ["monco:project:proj_12345/*"],
      effect: "ALLOW",
    },
    constraints: {
      time: {
        duration: "P30D", // ISO 8601 duration: valid for 30 days
      },
      rate_limits: {
        max_operations: 100,
        window_seconds: 3600,
      },
      context: {
        project_phase: "execution",
        requires_human_review: true,
      },
    },
  };

  const mandate = await client.createMandate(request);
  console.log("Mandate created:", mandate.mandate_id);
  return mandate;
}
```
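The `"P30D"` value is an ISO 8601 duration. The Mandaitor service interprets constraint durations itself; the sketch below exists only to make the notation concrete, and the day-only parser is an illustration, not SDK code:

```typescript
// Local illustration of what a day-only ISO 8601 duration like "P30D" denotes.
// The service enforces the time constraint; this helper only decodes the notation.
function isoDaysToMs(duration: string): number {
  const match = /^P(\d+)D$/.exec(duration);
  if (!match) {
    throw new Error(`Unsupported duration (day-only sketch): ${duration}`);
  }
  return Number(match[1]) * 24 * 60 * 60 * 1000;
}

// A mandate issued now with "P30D" would lapse roughly here:
const expiresAt = new Date(Date.now() + isoDaysToMs("P30D"));
```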
### What to review after creation
Once the mandate is issued, verify that the following evaluation questions have clear answers.
| Question | What a good first evaluation looks like |
|---|---|
| Is the delegate specific enough? | The agent is identifiable, not a vague class of tools |
| Is the action narrow enough? | The action reflects a concrete workflow step |
| Is the resource bounded? | The resource scope is limited to a project, patient, or domain slice |
| Are the constraints meaningful? | The mandate can expire, rate-limit, or require human review |
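To make the rate-limit constraint concrete: `max_operations: 100` with `window_seconds: 3600` means at most 100 operations within any rolling one-hour span. Enforcement happens on the Mandaitor side; the sliding-window model below is purely illustrative of those semantics:

```typescript
// Illustrative model of the mandate's rate limit: at most `maxOperations`
// timestamps inside any `windowSeconds` span. Not how the service is built.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private maxOperations: number,
    private windowSeconds: number,
  ) {}

  allow(nowMs: number): boolean {
    const cutoff = nowMs - this.windowSeconds * 1000;
    // Drop operations that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
    if (this.timestamps.length >= this.maxOperations) return false;
    this.timestamps.push(nowMs);
    return true;
  }
}
```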
## 7. Verify a real agent action

Now verify a single action before the AI agent performs it.

```typescript
async function verifyAgentAction() {
  const result = await client.verify({
    delegate_subject_id: "monco:agent:validation-v3",
    action: "construction.validation.approve",
    resource: "monco:project:proj_12345/zone:A/trade:electrical",
    context: {
      project_phase: "execution",
      requires_human_review: true,
    },
  });

  if (result.decision === "ALLOW") {
    console.log(`Action is allowed by mandate ${result.mandate_id}`);
    console.log("Signed proof artifact:", result.proof);
  } else {
    console.log(`Action denied. Reason: ${result.reason_codes?.join(", ")}`);
  }

  return result;
}

await createFirstMandate();
await verifyAgentAction();
```
The important point is not just the ALLOW or DENY outcome: the decision stays tied to a specific mandate and produces evidence your team can review later.
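In an application, this check typically wraps the action itself so the agent can never execute without a passing verification. A sketch of that verify-before-execute pattern, where `VerifyFn` stands in for `client.verify` and the result shape is assumed from the fields above:

```typescript
// Sketch of the verify-before-execute pattern. `VerifyFn` stands in for
// `client.verify`; the real response shape may differ.
type VerifyResult = {
  decision: "ALLOW" | "DENY";
  mandate_id?: string;
  reason_codes?: string[];
};
type VerifyFn = () => Promise<VerifyResult>;

async function guarded<T>(
  verify: VerifyFn,
  execute: () => Promise<T>,
): Promise<T> {
  const result = await verify();
  if (result.decision !== "ALLOW") {
    throw new Error(
      `Blocked by mandate check: ${result.reason_codes?.join(", ") ?? "no reason codes"}`,
    );
  }
  return execute(); // runs only once the mandate check has passed
}
```

Keeping the guard generic means any agent action, regardless of industry vocabulary, goes through the same gate.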
## 8. What to look for in the verification result
A productive evaluation reviews both the decision and the governance signal.
| Field | Why evaluators care |
|---|---|
| `decision` | Confirms whether the requested action is allowed right now |
| `mandate_id` | Tells you which delegation object authorized the action |
| `reason_codes` | Explains denials or constraint mismatches |
| `proof` | Provides the audit-friendly evidence artifact for storage or review |
If you are evaluating Mandaitor for a pilot, save at least one successful and one denied verification result. That comparison usually makes the value proposition obvious to both engineering and governance stakeholders.
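One lightweight way to keep that ALLOW/DENY pair is to append each result to a small evidence log. The record shape below mirrors the fields discussed above but is illustrative, not an official SDK type:

```typescript
// Illustrative evidence log for a pilot review: keep at least one ALLOW
// and one DENY record side by side. Not an official SDK type.
type EvidenceRecord = {
  decision: "ALLOW" | "DENY";
  mandate_id?: string;
  reason_codes?: string[];
  proof?: unknown;
  recorded_at: string;
};

const evidenceLog: EvidenceRecord[] = [];

function recordEvidence(result: Omit<EvidenceRecord, "recorded_at">): void {
  evidenceLog.push({ ...result, recorded_at: new Date().toISOString() });
}
```

In a real pilot you would persist these records (a database table or object store) rather than hold them in memory, so governance reviewers can inspect them later.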
## 9. Common evaluator variations
Different teams usually start with one of the following tracks.
| Track | Typical first mandate |
|---|---|
| Healthcare | Clinical data reader, discharge-letter assistant, triage copilot |
| Construction | BIM validator, inspection assistant, progress-reporting agent |
| Internal platform | Generic workflow agent acting under employee-issued mandates |
The modeling pattern is identical even when the vocabulary changes.
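The action strings in this guide follow a dotted three-part shape such as `construction.validation.approve`. Whatever vocabulary your track uses, a simple shape check keeps placeholder identifiers consistent during an evaluation; the pattern below is an assumption drawn from this guide's examples, not an official taxonomy rule:

```typescript
// Hypothetical shape check: three dot-separated lowercase segments, as in
// "construction.validation.approve". Not an official taxonomy rule.
const ACTION_PATTERN = /^[a-z_]+\.[a-z_]+\.[a-z_]+$/;

function isWellFormedAction(action: string): boolean {
  return ACTION_PATTERN.test(action);
}
```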
## 10. Optional: evaluate the React components

If you are building a React application, you can use `@mandaitor/react` for a ready-made mandate creation UI. For a deeper component walkthrough, continue to the React Integration Guide.
## 11. Recommended next steps after this guide
If this first run was successful, the next useful documents are usually the following.
| Next step | Why to read it |
|---|---|
| /api-reference | To inspect the exact request and response schema while you integrate or review security assumptions |
| React Integration Guide | To embed mandate creation and review directly into your application |
| Proof-of-Mandate | To understand the proof artifact and downstream evidence workflows |
| Use-case landing pages on trust.mandaitor.io | To align your pilot framing with industry-specific buyer and compliance language |
If you can successfully create one mandate and verify one real action against it, you already have the core ingredients for a serious Mandaitor pilot evaluation.