Issue 004 Predicai Signal™ Free

Agents With Receipts

AI is moving from generating outputs to taking actions. When an agent books a meeting, sends a message, or modifies a file, the audit trail is not a side effect — it is the accountability layer.

The agent sent the email.
You didn’t review it first.
The client replied.
Now you’re explaining why the terms in that email don’t match what you actually agreed to.

The agent was efficient.
The receipt was missing.

AI is no longer just talking. It’s doing.

For the first two years of the mainstream AI wave, the output of every interaction was text. You asked, the model answered, you decided what to do with the answer. The human was always the last step before anything happened in the real world.

That changed. AI agents can now book calendar events, send emails, update CRM records, create and modify files, run code, browse the web, and call APIs. The model is no longer the author — it is the actor.

This is a qualitative shift in what governance means. When AI produces text, a bad output costs you the time it takes to notice and discard it. When AI takes action, a bad output costs you the time, money, and relationships involved in reversing something that already happened.

When AI produces text, a mistake is a draft you discard.
When AI takes action, a mistake is something you have to undo.
Failure mode: action without a receipt
The agent takes an action — sends the email, updates the record, books the slot, modifies the file. There is no log of what it did or why. When something goes wrong — and eventually it will — you have no way to understand what happened, no way to identify where the logic failed, and no way to prevent it from happening again. You are not operating inside the system. You are hoping it works.
The governance gap when AI acts

When AI generates text:
1. AI produces output
2. Human reviews output
3. Human decides to act
4. Human takes action
Human is the last step before real-world impact. Mistakes are caught before they matter.

When AI takes action:
1. AI receives instruction
2. AI acts directly on the world
?. Human learns what happened
?. Human tries to reverse it
Human is removed from the loop. Mistakes are discovered after they matter.

The receipt is as important as the action.

Organizations that govern AI well in the agent era will build one thing into every autonomous workflow: a human-readable log of what the AI did and why. Not just for compliance — for learning, debugging, and accountability.

The receipt tells you when an agent exceeded its brief. It tells you what assumptions it made. It tells you when to intervene before the next iteration. Without a receipt, you cannot learn from what went wrong. You cannot prevent it from happening again.

This is not a technical challenge. It is a governance choice. Every agent workflow can be designed to leave a record. Most are not — because the people building them optimized for speed over accountability.

If your AI workflow doesn’t leave a receipt,
you are not operating — you are hoping.
What a receipt captures
What it did: the specific action taken, in plain language
Why it did it: the instruction or logic that triggered the action
What it assumed: the context it was operating with at the time
What it changed: the before-and-after state of anything it modified
Human checkpoint: who reviewed it and when, or that no one did
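The five fields above can be sketched as a single record per action. This is a hypothetical Python dataclass, not part of any agent framework; every name here is illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ActionReceipt:
    """One receipt per agent action, mirroring the five fields above."""
    what_it_did: str                  # the specific action, in plain language
    why: str                          # the instruction or logic that triggered it
    assumptions: str                  # the context the agent was operating with
    before_state: str                 # state before the change
    after_state: str                  # state after the change
    reviewed_by: Optional[str] = None # who reviewed it, or None if no one did
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

The point of `reviewed_by` defaulting to `None` is that "no one reviewed this" is itself a fact worth recording, not an empty field.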

Build the receipt before you need it.

The minimum viable receipt is a plain-text log: what did the agent do, when, and based on what instruction? This does not require sophisticated tooling. A simple append to a text file after each action is enough to start.

The more important discipline is the human checkpoint. For any agent workflow that acts on something irreversible — sends an external communication, modifies a financial record, takes a customer-facing action — there should be a human review step before the action fires, not after.
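One way to sketch that checkpoint in code, assuming a hypothetical `execute` gate and an `approve` callback that asks a human; the set of irreversible action types is illustrative:

```python
# Action types that must not fire without a human sign-off (illustrative set)
IRREVERSIBLE = {"send_email", "update_financial_record", "publish_public_change"}

def execute(action_type, perform, approve):
    """Run reversible actions directly; gate irreversible ones behind a human.

    perform: zero-argument callable that carries out the action.
    approve: callable taking the action type, returning True only if a
             human has reviewed and approved it.
    """
    if action_type in IRREVERSIBLE and not approve(action_type):
        return "blocked: awaiting human approval"
    return perform()
```

The design choice worth noting: the gate sits before `perform()` runs, so a rejected action never happens at all, rather than being logged and reversed after the fact.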

You will need the receipt before you think you need it. Design it into the workflow now, while it is cheap to do so.

Three receipt design principles
01. Log the action in plain language. Not just an API call or a system event: a human-readable description of what happened and why, written so a non-technical reviewer can understand it.
02. Flag the irreversible ones. Not all agent actions carry equal risk. Build different checkpoints for reversible actions (easy to undo) versus irreversible ones (external emails, financial transactions, public-facing changes).
03. Review the receipts on a schedule. Logs are only useful if someone reads them. Set a weekly review of what your agents did; this is where you find the gaps before they become incidents.
This week’s move

For any AI workflow you have that takes action — sends emails, updates records, creates files — add one step today: a human-readable log of what the AI did and why. Even a simple text file. Build the receipt before you need it. You will need it.

Signal (tracks the shift): Every issue tracks a shift in how AI-assisted work actually operates. This one is about the moment AI moved from advising to acting.
Academy (builds the system): Level 3 of Academy is the Agentic Blueprint: building AI chains that take real-world action. The Human-in-the-Loop Protocol is a required deliverable at Level 3. Explore Academy →
The Standard (sets the bar): Gate 6 of the Standard is Operational Safety: is there a human-in-the-loop checkpoint before the output acts on anything real? For agents, this gate is existential. Read the Standard →
Found this useful?
Forward it to someone who needs it.
Signal is free. Always will be.
Continue the thread

The next shift.
Issue 005.

Issue 005 examines why the AI stack is reorganizing around composable systems rather than monolithic platforms — and why the decisions you make about AI tooling in the next 12 months will determine whether you own your workflow or rent it.

🔒 Free always. No spam. Unsubscribe any time.