TL;DR: Functional AI is the antidote to both hype and harm: narrow scope, bounded permissions, verifiable outcomes, and less waste. That same model can help a child communicate - translating weak signals into understood intent - and it can make complex platforms like WordPress safer for AI to deploy and configure. That’s sustainable AI: bounded by purpose.
My favorite use of AI isn’t “write me anything.” It’s: help me understand what a human is trying to express when the human can’t get it across in the usual way.
My daughter, Ida Marie, cannot speak, and she can’t use sign language in a conventional way. She has a small set of signals. One of them is raising her finger - sometimes it feels like 200 times a day - and for her it means: “I want to say something.” But most of the time, we still can’t decode what the “something” is.
That experience forces a very different view of AI: not as a replacement for humans, but as a bridge for human limitations - a tool that turns weak signals into understood intent.
“When a person can’t communicate, AI should translate the signal - not harvest the person.”
And that’s the bridge to the rest of this story: the best AI systems will not be the broadest ones. They’ll be the ones with tight scope, careful permissions, and measurable impact - because that’s how you create trust.
Which brings us to WordPress.
WordPress “Agent Skills” is a signal of platform maturity
WordPress introducing “Agent Skills” is meaningful because it’s not just “AI inside WordPress.” It’s WordPress starting to define a contract for AI execution: what actions are permitted, where they can run, and how outcomes can be validated.
That matters because most AI-in-the-dashboard features fail in the same way: they stay in the “advisor” role. They can suggest, but they can’t safely do - and they can’t prove the result. WordPress is now moving toward the pattern that mature systems eventually converge on:
- Abilities describe what can be done in a structured way (see the sketch after this list).
- Skills describe where and how those abilities can be executed safely.
- A sandbox (like Playground) creates a disposable environment to test changes without risking production sites.
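To make that contract concrete, here’s a minimal sketch of what registering an ability could look like. It follows the shape of the proposed WordPress Abilities API as I understand it - the function name (`wp_register_ability`), the registration hook, and the argument keys should all be treated as assumptions that may shift as the API stabilizes:

```php
<?php
add_action( 'init', function () {
	// Bail out if the Abilities API isn't available on this install.
	if ( ! function_exists( 'wp_register_ability' ) ) {
		return;
	}

	wp_register_ability( 'example/list-enqueued-scripts', array(
		'label'       => 'List enqueued scripts',
		'description' => 'Returns the handles of all scripts registered on this site.',
		// Structured output lets a caller - human or agent - verify the result.
		'output_schema' => array(
			'type'  => 'array',
			'items' => array( 'type' => 'string' ),
		),
		// "What can be done": a narrow, read-only action.
		'execute_callback' => function () {
			return array_keys( wp_scripts()->registered );
		},
		// "Who may do it": bounded permissions instead of blanket access.
		'permission_callback' => function () {
			return current_user_can( 'manage_options' );
		},
	) );
} );
```

Notice what the structure forces: every action carries its own permission check and a schema a caller can validate against - exactly the “permitted, placed, verifiable” contract described above.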
This is subtle, but foundational: it’s the difference between AI that drafts opinions and AI that operates inside guardrails. In practice, it means an agent can (eventually) do things like: create a clean test instance, install a plugin set, change configuration, run checks, roll back, and document what happened - all without turning your live site into a lab.
“Bounded AI is a sustainability strategy: fewer watts, fewer mistakes, more trust.”
And that’s why this is bigger than “WordPress gets AI.” It’s WordPress positioning itself as an environment AI can reliably operate within - because reliability requires three things: permissions, isolation, and verification.
Once a platform has a safe execution surface, you can build bounded, functional agents that do one job end-to-end: observe → propose → apply → verify. Not “general AI,” but AI with a job description.
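As a sketch of that job description, the whole loop fits in a few lines. Everything here is illustrative - `run_bounded_job` and the four callables are hypothetical stand-ins for platform calls (e.g. abilities) that carry their own permission checks:

```php
<?php
/**
 * Minimal sketch of a bounded agent job: observe → propose → apply → verify.
 */
function run_bounded_job( callable $observe, callable $propose, callable $apply, callable $verify ): bool {
	$before = $observe();           // 1. Observe: measure the current state.
	$change = $propose( $before );  // 2. Propose: derive one bounded change.

	if ( null === $change ) {
		return true;                // Nothing to change; state already matches the claim.
	}

	$apply( $change );              // 3. Apply: execute inside the permitted surface
	                                //    (ideally a disposable sandbox first).
	return $verify( $before, $observe() ); // 4. Verify: prove the change did what it claimed.
}
```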
In WordPress, that’s especially workable because so much of the site’s behavior flows through a small set of control points (plugins, hooks, and script loading), which makes ‘bounded change + measurable result’ realistic.
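Script loading alone gives you a usable enforcement point. Here’s a simplified sketch using WordPress’s real `script_loader_tag` filter; the blocked-handle list and the consent check are placeholders for whatever your CMP actually exposes:

```php
<?php
// Sketch: suppress specific script tags until consent is present.
// `script_loader_tag` filters the <script> HTML emitted for each handle.
add_filter( 'script_loader_tag', function ( $tag, $handle ) {
	// Hypothetical handles that must not execute pre-consent.
	$blocked_pre_consent = array( 'some-analytics', 'some-marketing-pixel' );

	// Hypothetical consent check; substitute your CMP's own API here.
	$has_consent = isset( $_COOKIE['consent_analytics'] )
		&& '1' === $_COOKIE['consent_analytics'];

	if ( ! $has_consent && in_array( $handle, $blocked_pre_consent, true ) ) {
		return ''; // Drop the tag entirely on pre-consent page loads.
	}
	return $tag;
}, 10, 2 );
```

In practice, consent platforms often rewrite the tag’s `type` attribute to `text/plain` instead of dropping it, so the script can be re-activated client-side once consent is given - either way, the control point is the same.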
One of the most obvious jobs inside WordPress is also one of the most neglected: making third-party behavior visible and controllable. Because the modern WP site isn’t a single system - it’s a constellation of plugins, tags, embeds, CDNs, and “just one more script” decisions. That sprawl is why configuration drifts, and why humans lose track of what actually runs on the page.
That’s the use case we focused on in our AI Privacy Advisor inside AesirX CMP Pro for WordPress - a functional workflow, not a chatbot. It scans what loads before consent, scans again after consent, proposes WordPress-native blocking rules, and generates draft privacy/cookie/consent text based on observed reality. You can then verify the change by re-running the scan.
The real problem: websites don’t know what they run
If you want a practical definition of “functional AI,” it’s this:
Turn a messy system into an observable system.
Most websites (especially WordPress sites) can’t answer a simple operational question with confidence:
“What exactly executes on first page load - and what changes after a user makes a consent choice?”
That’s not because teams don’t care. It’s because modern sites are assembled from moving parts: plugins, tag managers, embedded widgets, A/B tools, CDN scripts, marketing pixels, chat tools - and they change constantly. Over time, the site becomes a system that behaves, but is no longer understood.
And this is where compliance breaks down, not at the banner layer, but at the runtime layer: pre-consent execution happens by default, and nobody can reliably prove what fired, when, and why.
“The web’s privacy failure is operational drift at scale.”
When we ran our July 2025 privacy scan of 36,496 Danish business websites, the most important takeaway wasn’t “people wrote bad policies.” It was that, at scale, the web behaves like an un-audited runtime: the measurements showed high-risk, pre-consent third-party loading patterns on ~73% of the scanned sites. That’s not a Denmark or EU-only phenomenon; in our Vietnam work, we repeatedly see the same failure modes, often at higher baseline rates (~90%).
So the question isn’t “how do we write better privacy policies?”
It’s: how do we continuously align what the site does with what the site claims - without turning every change into a manual forensic exercise?
That’s the kind of job functional AI is good at: narrow scope, repeatable checks, measurable before/after outcomes - and no need for surveillance.

Functional AI is the opposite of surveillance AI
If AI is broad, hungry, and “always learning,” it tends to push toward extraction. That’s the business model of surveillance: more inputs, more profiling, more inference, more risk.
Functional AI is narrow by design: fixed job, bounded permissions, explainable changes, verifiable outcomes.
The point isn’t to be impressive. The point is to be reliable.
“Functional AI isn’t smaller ambition - it’s bigger responsibility.”
And there’s a sustainability angle too: broad models running everywhere invite wasted compute. Functional AI is inherently more “energy honest” - because it doesn’t need infinite context to do a single job well.
Now combine those ideas with the consent compliance problem: you don’t need an AI that can philosophize about privacy. You need one that can do a boring, critical workflow consistently.
Which leads to a practical definition of “useful AI” for websites:
Show me what loads before consent. Show me what loads after consent. Help me align configuration and documentation with what actually happens.
That’s the concept behind our AI Privacy Advisor.
The AI Privacy Advisor: making compliance observable
Consent compliance fails when it becomes a document-only exercise. “Functional AI” changes the approach: it treats compliance as system state. It’s essentially a before/after experiment harness for consent.
The AI Privacy Advisor workflow is straightforward and intentionally limited:
- Scan the site before consent to detect third-party requests, scripts, beacons, and high-risk technologies.
- Scan again after consent to see what changes when the user opts in (the before/after comparison is sketched after this list).
- Propose platform-native controls (in WordPress terms) that can block what must not load pre-consent.
- Generate drafts for the required texts (privacy policy sections, cookie/technology declaration, and consent language) based on the observed behavior, not generic templates.
- Keep the scope bounded: it’s not “AI that does everything,” it’s “AI that does this job and proves it”.
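A minimal sketch of the before/after comparison at the core of that workflow - the host lists here are hypothetical scan results; in the real feature they come from observing actual page loads:

```php
<?php
/**
 * Compare pre- and post-consent scans. Inputs are lists of third-party
 * hosts observed during each page load.
 */
function diff_consent_states( array $pre_consent_hosts, array $post_consent_hosts ): array {
	return array(
		// Anything firing before consent is a candidate for a blocking rule.
		'fires_pre_consent'   => array_values( $pre_consent_hosts ),
		// Hosts that only appear after opt-in are behaving as declared.
		'unlocked_by_consent' => array_values( array_diff( $post_consent_hosts, $pre_consent_hosts ) ),
	);
}

// Hypothetical scan results:
print_r( diff_consent_states(
	array( 'cdn.example.net', 'pixel.tracker.example' ),
	array( 'cdn.example.net', 'pixel.tracker.example', 'stats.analytics.example' )
) );
```

The same diff, run again after the proposed blocking rules are applied, is what “proves it”: `fires_pre_consent` should shrink to only the hosts you’ve explicitly allowed.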
This is also where the WordPress “Agent Skills” direction becomes relevant: the future of trustworthy AI on platforms is “reason → permission → execute → verify.” Privacy is exactly the kind of domain that demands that structure.
And that circles back to why I started this newsletter with my daughter.

From consent to communication: AI that earns trust
I’m building privacy tools because the internet has normalized “extract first, explain later.”
But I’m also thinking about AI for the same reason I think about privacy: human dignity.
When Ida Marie raises her finger, the world treats it like noise because it can’t decode the signal. Functional AI could be the bridge - not by “knowing everything,” but by being trained for a narrow purpose: interpret her signals, help translate intent, and let her be understood.
“Assistive AI and privacy AI share the same moral constraint: the goal is empowerment, not extraction.”
That’s the same ethical structure I want in privacy compliance AI:
- Do something specific
- Do it verifiably
- Do it without turning people into telemetry
So yes - the AI Privacy Advisor is a product feature. But the deeper idea is broader and, I think, more important:
The future of sustainable AI isn’t bigger. It’s bounded.
Bounded in scope, bounded in permissions, bounded in power consumption - and bounded by purpose.
Ronni K. Gothard Christiansen
Technical Privacy Engineer & CEO @ AesirX.io
