Role Architecture

Role-based capability.
One architecture. Built per role.

The Predicai architecture is universal. The application is built per role. What changes is what “Gold” looks like inside each function — and that is already built in.

L1 Define
L2 Classify
L3 Build
L4 Govern
L5 Scale
The principle

One architecture.
Built per role.

Every role operates inside the same five-level architecture. The progression, the standard, and the review model are fixed.

What changes is the work it’s applied to. A Chief of Staff’s Gold standard at Level 3 looks different from a Sales rep’s Gold standard. That difference is exactly what the Role Pack defines. A Role Pack is the operational expression of the system for one function: the pressure scenario, Gold examples, failure modes, artifact context, and judgment triggers. That is not surface customization — it is the application layer already built in.

This is how a single system scales across an enterprise without breaking — and without requiring a different curriculum for every function.

Research mapping real AI usage to more than 19,000 standardized job tasks shows that AI lands unevenly at the task level, not uniformly across entire jobs. That is why role-specific application of a universal architecture matters more than generic training. Anthropic Economic Index, 2025 · Microsoft Research, 2025

L1
Define Work
Intent Protocol — Definer role
Task context · Failure modes
L2
Classify Value
Time Ledger — Evaluator role
Work categories · ROI math
L3
Build Systems
Workflow Design — Builder role
Workflow shape · Outputs
L4
Apply Judgment
HITL System — Governor role
Triggers · Rollback
L5
Scale Doctrine
Doctrine System — Architect role
Doctrine test · Transfer rules

Each role follows the same architecture.

The structure never changes. That’s what makes it scalable.

Same standard across every role
Clear · Specific · Testable · Useful · Safe · Transferable
That’s what makes role-to-role capability visible — and enterprise-comparable.
Stays fixed across all roles
Five-level progression
Role identity per level
The Predicai Standard
Gold / Good / Bad framework
Agent + human review model
Capstone System Dossier
Changes per role
Pressure scenario
Gold examples at each level
Role-specific failure modes
Artifact context and names
Business case ROI categories
Judgment triggers and rollback
What this produces
Consistent standard across functions
Role-relevant proof of work
Comparable capability signals
Scalable without curriculum redesign
Enterprise-deployable by role
Leadership-visible capability signals by function
Cross-role comparison using the same universal standard
Research grounding

The Automate / Augment / Human-only classification at Level 2 maps directly to how researchers track real AI impact. Published analysis of millions of AI interactions mapped to over 19,000 standardized job tasks confirms that AI impact operates at the task level, not the job level — which is exactly why Role Packs define what this classification looks like inside specific functions, not just in the abstract. Anthropic Economic Index, 2025 · Microsoft Research, 2025

Five functions. One system.

Each role pack defines what Gold looks like across all five levels. Select a role to see the detail.

Strategy & Decision
Chief of Staff
Cross-functional synthesis, executive briefing, decision clarity under high ambiguity and high consequence.
Briefing workflow Decision governance Exec doctrine
Revenue Generation
Sales / GTM
Call-to-close workflow, CRM integrity, follow-up consistency, pipeline accuracy, and forecast governance.
Call workflow Pipeline governance Sales doctrine
Execution Layer
Operations / Marketing
Campaign execution, message alignment, asset coordination across channels. Where AI breaks most often inside companies.
Campaign workflow Alignment governance Marketing doctrine
System Alignment
RevOps
Pipeline truth, funnel visibility, CRM integrity. Connects sales, marketing, and finance into one defensible operating view.
Pipeline workflow Forecast governance RevOps doctrine
People & Risk
HR / People Ops
Hiring, onboarding, performance evaluation. Where subjectivity is highest, risk is highest, and inconsistency is most expensive.
Hiring workflow Decision governance HR doctrine
More coming
Finance • Product • Legal
Role packs for Finance / FP&A, Product, Legal / Compliance, Customer Success, and Clinical Ops are in development.
In pipeline

Level 2 — Every role
Before any workflow is designed, each role classifies its recurring work into Automate, Augment, and Human-only categories. The classification logic is fixed; the task mix looks different for a Chief of Staff than for a Sales rep.

Strategy & Decision
Chief of Staff
Monday Morning Panic
8:10 AM. CEO needs a briefing by 10:00. Slack shows a three-way conflict: Product says launch is on track. Legal says rollout is blocked. Sales is already messaging customers.

“Give me the real picture in 20 minutes.”
What strong operators do
Separate facts from interpretations before synthesizing anything
Surface conflict rather than smooth it — clarity beats comfort
Define the decision, not just the summary
Assign uncertainty explicitly — known vs. unknown vs. contested
✓ Gold Standard
Gold intent protocol
Task: Synthesize conflicting inputs into a decision-ready briefing
Context: Product/Legal misalignment, external messaging risk from Sales
Constraints: No speculation as fact. Decision-focused. Conflicts surfaced, not resolved by assumption
Success: CEO can act immediately with stated recommendation and confidence level
Role failure modes at L1
Summarizing instead of deciding — data without a recommended path
Smoothing conflict instead of surfacing it to the executive
Treating AI synthesis as fact when sources conflict
Missing the actual decision the briefing needs to enable
✓ Gold Standard
Reader / Thinker / Writer / Review
Reader: Extract facts from each source independently. Flag contradictions without resolving them
Thinker: Resolve factual conflicts using verifiable data. Identify the actual decision
Writer: Decision-first briefing. Recommendation before context. Known / unknown / contested clearly labeled
Review: Chief of Staff confirms no speculation entered as fact before delivery
Workflow failure modes
Merging sources too early before conflicts are identified
Missing contradictions because the AI found a plausible synthesis
Briefing reads as update, not recommendation
Delivering without a human review gate
Gold triggers
Conflicting inputs from two or more functions on the same issue
Unclear ownership of the decision
Deadline under 4 hours for a board-level or public-facing output
Any input marked legally sensitive
Gold monitoring
Decision reversal rate
Missing alignment — teams acting on different versions of truth
Escalation frequency back to CEO
Gold rollback
Immediate clarification loop with source functions before delivery
Escalation to General Counsel if legal and operational inputs conflict
Briefing held until human-verified resolution exists
Judgment failure modes
Delivering a briefing with unresolved conflict buried in the synthesis
Trusting AI conflict resolution on high-stakes decisions
What must be transferable
Briefing structure and quality standard
Conflict handling protocol
Escalation decision rules
Decision memo format
Gold doctrine includes
Briefing format: recommendation first, evidence second, unknowns third
Conflict handling: surface and attribute, never resolve by assumption
Escalation criteria with specific triggers
Doctrine test

“Can a new Chief of Staff produce a decision-ready executive briefing on day one without asking how?”

Capstone ROI
Faster executive decision cycles
Reduced decision reversals
Improved cross-functional alignment
Onboarding acceleration
L1
Briefing intent protocol
Real executive decision under ambiguity
L2
Workload classification audit
One week classified: automate / augment / human-only
L3
Executive briefing workflow
Reader / Thinker / Writer system for cross-functional synthesis
L4
Decision governance system
HITL triggers for high-stakes decision support
L5
Executive operating doctrine
Transferable standard for briefings, conflict handling, escalation
Revenue Generation
Sales / GTM
Monday Morning Panic
8:45 AM. You just finished a strong call. You need a follow-up email, CRM notes, and an internal summary — by noon. Budget pressure, legal review, and a competitor were all mentioned. The CEO wants to know if this deal is real.
What strong operators do
Define before generating — task, constraints, and context before any draft
Separate thinking from writing — identify risks before output
Review for truth, not tone — check for invented details before sending
Maintain decision-maker clarity — who owns the next step
✓ Gold Standard
Gold intent protocol
Task: Follow-up email to confirm next steps and reinforce value without introducing new claims
Context: Hospital evaluating imaging. Budget cycle pending, clinical lead engaged, no commitment made
Constraints: No new claims. Professional tone. Under 200 words. Decision-focused
Success: Clear next step scheduled. No ambiguity about ownership. No invented detail
Role failure modes at L1
Sounding promotional instead of precise
Missing decision-maker signals buried in the call
Writing before achieving decision clarity
Skipping constraints and hoping the model infers them
✓ Gold Standard
Reader / Thinker / Writer / Review
Reader: Extract facts from call notes. Identify decision-makers. Flag objections and open questions
Thinker: Assess deal risk and next step validity. Identify what must not appear in the email
Writer: Follow-up email + CRM update + internal deal summary from one set of inputs
Review: Human confirms no invented detail before anything is sent or logged
Workflow failure modes
Collapsing Reader / Thinker / Writer into one prompt
Skipping the CRM structure step
Sending before human review
Writing from memory instead of structured inputs
Gold triggers
Deal above $50K with no confirmed decision-maker
Forecast change greater than 10% in a single week
Legal or procurement flag raised on the call
Next step missing after three touches
Gold monitoring
AI override rate — how often humans correct output
Stalled deals beyond 14 days with no activity
Missing next-step field in CRM
Gold rollback
RevOps corrects pipeline within 2 business hours of flag
Repeat errors trigger workflow adjustment
Escalation to manager if deal risk unresolved after 24 hours
Gold doctrine includes
Defines what a good follow-up contains — not as a suggestion, but as a checklist
Risk levels: Low = committed next step, Medium = open question, High = no decision-maker or stalled
Required CRM fields before a deal advances stages
Human review step with named owner and backup
Doctrine test

“Can a new rep produce a Gold-standard follow-up and CRM update on their first day, without asking anyone?”

Capstone ROI
Hours recovered from CRM admin and follow-up drafting
Improved pipeline accuracy
Faster rep onboarding through documented standard
L1
Follow-up intent protocol
Post-call email + CRM brief for a real deal
L2
Sales activity audit
One week of sales activities classified
L3
Call-to-close workflow
Reader / Thinker / Writer system for call processing
L4
Pipeline risk governance system
HITL triggers, monitoring signals, rollback protocol
L5
Sales operating doctrine
Transferable standard for follow-up, CRM, and pipeline integrity
Execution Layer
Operations / Marketing
Monday Morning Panic
Monday, 9:05 AM. Campaign launches Wednesday. Draft messaging is incomplete. Design is not aligned to copy. Landing page CTA is unclear. Sales is asking what to say. Leadership is asking if you are ready.

You need a campaign narrative, execution plan, and cross-team alignment. Today.
What strong operators do
Define the message before producing any assets
Separate planning from production — message clarity before execution
Sequence work instead of multitasking it
Validate all channel outputs against a single source of truth
✓ Gold Standard
Gold intent protocol
Task: Campaign narrative + execution brief for a product launch
Context: Launch in 48 hours, mixed messaging across teams, unclear CTA
Constraints: All channel outputs must align to one message. No new claims. Timeline fixed
Success: Consistent message across email, landing page, and sales enablement with a clear CTA in every output
Role failure modes at L1
Generating assets before the core message is defined
Optimizing copy before channel alignment exists
Confusing activity volume with progress
Content that looks complete but has no clear CTA
✓ Gold Standard
Reader / Thinker / Writer / Review
Reader: Gather all inputs. Identify contradictions across existing messaging before producing anything
Thinker: Define core message, audience, CTA, and risks. Resolve channel conflicts at strategy level
Writer: Campaign brief → landing page → email → sales summary — all from the single brief
Review: CTA consistency confirmed by a human across all channel outputs before launch
Workflow failure modes
Writing assets before message clarity exists
Each channel briefed separately — no single source of truth
Inconsistent voice or CTA across outputs
No cross-channel alignment check before launch
Gold triggers
Messaging differs across any two channels
CTA is unclear or absent in any output
Stakeholder disagreement on positioning
Launch less than 48 hours away with unresolved alignment
Gold rollback
Pause launch if messaging is misaligned at the 24-hour mark
Re-align at the brief level — not by editing individual assets
Reissue all channel assets from the corrected brief
Owner: marketing lead. Timeline: same-day correction
Gold doctrine includes
Campaign structure: brief first, assets second, review third
Message hierarchy: one source of truth, all channels derived from it
CTA standards: what makes a CTA acceptable in each channel
Review checkpoints with named owner and what they check
Doctrine test

“Can someone new run a campaign from brief to launch without asking a single question about process?”

Capstone ROI example

8 hrs/week saved on coordination + 6 hrs/week saved on rewriting = 14 hrs/week recovered × $60/hr × 52 weeks = $43,680/year per operator.
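The arithmetic above can be sketched as a quick calculation. The hours-saved figures and the $60/hr rate are the example's illustrative assumptions, not measured values; substitute your own operator numbers:

```python
# Capstone ROI sketch — figures below mirror the illustrative example,
# not measured results. Swap in your own operator's numbers.
coordination_hours = 8    # hrs/week saved on coordination (assumed)
rewriting_hours = 6       # hrs/week saved on rewriting (assumed)
hourly_rate = 60          # $/hr blended operator rate (assumed)
weeks_per_year = 52

weekly_hours = coordination_hours + rewriting_hours    # 14 hrs/week recovered
annual_value = weekly_hours * hourly_rate * weeks_per_year

print(f"${annual_value:,}/year per operator")  # → $43,680/year per operator
```

The same three inputs (hours recovered, loaded hourly rate, working weeks) drive every role's capstone business case; only the task mix behind the hours changes.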

L1
Campaign intent brief
Full intent protocol for a real product launch
L2
Marketing time ledger
One week classified: automate / augment / human-only
L3
Campaign workflow system
Brief-to-launch with Reader / Thinker / Writer separation
L4
Campaign governance system
HITL triggers, alignment monitoring, rollback protocol
L5
Marketing doctrine
Transferable standard for how campaigns are built and reviewed
System Alignment
RevOps
Monday Morning Panic
Monday, 9:15 AM. Leadership asks three questions: Why did pipeline drop? Are we going to hit the number? Which deals are real?

You have CRM inconsistencies, conflicting reports from Sales and Finance, and an AI-generated pipeline summary that has not been validated. You need a clear pipeline view, a risk analysis, and an answer leadership can act on. Now.
What strong operators do
Standardize data before analyzing — bad inputs produce confident-sounding lies
Separate signal from noise — not all pipeline movement is meaningful
Define what real pipeline means before reporting on it
Expose risk early — surprises are more expensive than bad news
✓ Gold Standard
Gold intent protocol
Task: Pipeline health report with risk segmentation for leadership
Context: CRM data inconsistent, AI summary unverified, forecast decision needed within 2 hours
Constraints: Defined pipeline criteria only. No guessing missing fields. Leadership-ready
Success: Clean pipeline by risk tier, risk visible before forecast call, actionable insight with named owner
Role failure modes at L1
Trusting CRM data without validating against defined criteria
Using inconsistent pipeline definitions across the report
Accepting AI-generated summaries as analysis
Reporting confidence without surfacing data quality issues
✓ Gold Standard
Reader / Thinker / Writer / Review
Reader: Ingest CRM, marketing, and finance data independently. Flag inconsistencies before analysis
Thinker: Classify deals by quality, risk, stage accuracy against defined criteria
Writer: Pipeline report by risk tier + risk summary + forecast input from validated pipeline only
Review: RevOps lead validates key deals before report is delivered
Workflow failure modes
Inconsistent stage definitions across deal types
Pipeline risk hidden in aggregate numbers
Delivering forecast input before validation is complete
Gold triggers
Deal stalled beyond 14 days with no documented activity
Required CRM fields missing on deals above $25K
Pipeline change greater than 15% week over week
Finance and CRM forecast inputs diverge by more than 10%
Gold rollback
Remove invalid deals from forecast pipeline immediately
Correct stage classifications with documented rationale
Reissue forecast with reconciliation note if changes exceed 10%
What must be transferable
Pipeline review structure and escalation criteria
Data validation rules and integrity standards
Forecast confidence framework
Reporting cadence and format
Gold doctrine includes
Pipeline review: data first, narrative second, risk flagged
Validation: no forecast moves without documented rationale
Integrity: every input traceable to a defined source
Doctrine test

“Can a new RevOps analyst run the full pipeline review and produce a defensible forecast without asking how?”

Capstone ROI
Forecast accuracy and consistency
Reduced pipeline hygiene burden on reps
Faster revenue reporting cycles
Scalable reporting without analyst dependency
L1
Pipeline intent brief
Intent protocol for a real pipeline health report under time pressure
L2
RevOps time ledger
One week classified across reporting, hygiene, analysis, validation
L3
Pipeline workflow system
Ingestion-to-report with Reader / Thinker / Writer separation
L4
Pipeline governance system
HITL triggers, monitoring signals, rollback for forecast integrity
L5
RevOps doctrine
Transferable standard for pipeline definitions, reporting, validation
People & Risk
HR / People Ops
Monday Morning Panic
8:45 AM. A hiring decision feels rushed. Interview feedback is inconsistent across four interviewers. The role definition was vague. Leadership is asking: “Is this person a fit?”

At the same time, two managers used AI to write performance feedback and there is no standard for what acceptable AI-assisted evaluation looks like. You need structured hiring signal, consistent evaluation, and defensible decisions. Now.
What strong operators do
Define the role clearly before evaluating any candidate
Standardize evaluation criteria before interviews begin
Separate signal from bias — structure reduces both
Document decision logic so it is auditable
Ensure consistency across managers
✓ Gold Standard
Gold intent protocol
Task: Define a role and the evaluation criteria applied consistently across all interviewers
Context: New hire needed quickly, requirements unclear, three interviewers with different opinions
Constraints: No vague criteria. Must align to documented business need. Usable without HR present
Success: Role clearly defined, criteria consistent, hiring decision defensible if challenged
Role failure modes at L1
Copying a generic job description and calling it a role definition
Using AI to generate criteria without aligning to business requirements
Allowing subjective fit language to replace defined criteria
✓ Gold Standard
Reader / Thinker / Writer / Review
Reader: Gather role requirements. Collect and keep interviewer notes separate before synthesis
Thinker: Evaluate against defined criteria. Flag inconsistencies across interviewers before conclusion
Writer: Structured evaluation + hiring recommendation with documented rationale + onboarding plan
Review: HR or hiring manager makes final decision. Decision rationale documented before offer
Workflow failure modes
Inconsistent interview questions across interviewers
Unstructured feedback that cannot be compared
Decision made without documented rationale
Bias introduced through undefined fit criteria
Gold triggers
Conflicting interviewer feedback with no reconciliation
High-impact hire (director level and above)
Any performance review with potential termination downstream
AI-assisted evaluation with no human review layer
Gold rollback
Pause decision and return to criteria alignment if feedback is irreconcilable
Re-run structured interviews if the original process was not criteria-based
Escalate to legal review before any decision with discrimination exposure
What must be transferable
Role definition and evaluation criteria structure
Interview scoring and decision framework
Candidate assessment standards
Hiring decision documentation format
Gold doctrine includes
Hiring: role intent defined before sourcing begins
Evaluation: criteria fixed before interviews start
Decision: documented rationale required for every outcome
Doctrine test

“Can a new HR manager run a complete hiring cycle, from role definition to decision, without asking how?”

Capstone ROI
Consistent hiring quality across managers
Reduced time-to-hire through structured process
Defensible evaluation and onboarding across managers
L1
Role intent brief
Structured role definition and evaluation criteria for a real open position
L2
HR time ledger
One week classified: automate / augment / human-only in HR work
L3
Hiring workflow system
Reader / Thinker / Writer system from role definition to hiring decision
L4
Hiring governance system
HITL triggers, monitoring signals, rollback for hiring decisions
L5
HR doctrine
Transferable standard for hiring, evaluation, and onboarding across managers
Take the Capability Diagnostic → or see how this maps into Academy →

Add any role. No rebuild required.

The architecture stays fixed. Adding a role means building its operational expression — the scenario, Gold examples, failure modes, artifact context, and judgment triggers for that function. Not rebuilding the system.

If the architecture changes per role, it is not a system — it is a curriculum.

Finance, Legal, Product, Customer Success, Clinical Ops — any role can be mapped to the system by following the Role Pack Template. The evaluation criteria, review model, and progression architecture are already built. What gets added is the role context: the pressure scenario, Gold examples, failure modes, artifact context, and judgment triggers. All role packs are reviewed against the Predicai Standard before being added to the system — ensuring the platform stays consistent as it scales.

Active · Chief of Staff
Active · Sales / GTM
Active · Operations / Marketing
Active · RevOps
Active · HR / People Ops
Pipeline · Finance / FP&A
Pipeline · Product
Pipeline · Legal / Compliance
Pipeline · Customer Success
Pipeline · Clinical / Healthcare Ops
Role Pack Template — Structure
Operating Context
Pressure Scenario
What Strong Operators Do
L1 Gold Example
L2 Value Classification
L3 Workflow System
L4 HITL Governance
L5 Doctrine + Test
Artifact Context
Capstone Business Case
The operating system

Not training. Not tools.
One architecture. Built per role. Run as a system.

The same capability spine runs across every function. What changes is the role application, not the architecture. The same standard governs every artifact. The same review model ensures every output holds up. Apply it to one team — then scale it across the organization.