For AI Developers
Test governance layers and system prompts in your AI products — today.
Why This Matters Now
If you build AI systems that interact with web content — crawlers, agents, summarizers, search products, coding assistants — you are already making governance decisions. Every time your system accesses a website, processes its content, or generates output based on it, implicit assumptions are being made about what is acceptable.
Today, there is no standard way for websites to communicate governance expectations to AI systems. robots.txt handles crawl permissions. Terms of service handle legal agreements. But neither was designed for the nuanced, policy-level communication that AI systems increasingly require.
AIPolicy is researching exactly this problem: how websites and AI systems can exchange structured governance signals. The specification is experimental and evolving, but it is concrete enough to test today. And that is where you come in. Whether you are an ML engineer, an applied AI team lead, a startup founder building on language models, a platform architect designing agent infrastructure, or a safety engineer working on alignment — this research intersects with your work.
Five Reasons to Experiment Today
Low-Cost Experimentation
Adding governance signal parsing to an AI system is a small engineering effort. The aipolicy.json file is a lightweight JSON declaration served at a well-known URI: fetching it is a single HTTP request, and including the parsed declaration adds at most a few extra tokens of context per request.
There is no heavy integration, no vendor dependency, and no regulatory binding. This is research infrastructure, not compliance machinery. You can implement it in a day, evaluate it over a sprint, and remove it with a single commit if it does not serve your needs.
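As an illustration, a discovery-and-parse step might look like the sketch below. The well-known path, the fetch_aipolicy helper, and the "return None on any failure" behavior are assumptions made for this example; the specification is the authoritative source for discovery and error handling.

```python
import json
import urllib.request
from urllib.parse import urljoin

# Hypothetical well-known path, used only for illustration;
# the specification defines the actual discovery location.
AIPOLICY_PATH = "/.well-known/aipolicy.json"

def fetch_aipolicy(site: str, timeout: float = 5.0) -> dict | None:
    """Fetch and parse a site's aipolicy.json declaration, if one exists."""
    url = urljoin(site, AIPOLICY_PATH)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return None
            return json.loads(resp.read().decode("utf-8"))
    except (OSError, ValueError):
        # Unreachable host, no declaration, or malformed JSON: treat as "no signal".
        return None

policy = fetch_aipolicy("https://example.com")
print(policy if policy is not None else "No governance declaration found.")
```

Because the result is just a dictionary or None, it can be threaded into whatever request pipeline you already have and removed just as easily.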
The risk is near zero. The learning value is substantial.
Future-Proofing
Governance standards for AI systems are coming. The EU AI Act, evolving case law around training data, and growing public expectations around transparency all point in the same direction: AI systems will need structured ways to respect publisher preferences.
By experimenting with governance signals now, you build internal expertise before these requirements crystallize. Your team develops an intuition for what governance layers look like in practice — how they interact with system prompts, how they affect output quality, and where the edge cases are. Early experience is a strategic advantage, not a burden.
Research Value
Every AI system that reads and responds to governance signals generates data about how this infrastructure actually works. What signals do systems parse correctly? Where do ambiguities arise? How do governance declarations interact with existing safety mechanisms?
These are open research questions, and real-world implementation data is the most valuable input we have. Organizations that experiment with AIPolicy can contribute to published findings, participate as co-authors where their contribution warrants it, and help build open datasets that benefit the entire field. Your implementation experience directly shapes the specification.
User Trust
Users and content publishers increasingly want to know how AI systems treat their content. An AI product that can point to a concrete governance framework — one that reads and respects structured policy declarations from websites — has a credible answer to this question.
Participating in open standards development is a visible commitment. It demonstrates that your organization takes publisher preferences seriously, not as a marketing claim, but as an engineering practice. Trust is built through observable behavior, not promises.
Additive, Not Override
AIPolicy signals are designed to work alongside your existing safety infrastructure, not replace it. They do not bypass RLHF training, constitutional AI constraints, platform content policies, or any other safety mechanism your system already uses.
Think of governance signals as an additional input layer. Your system already has safety guardrails. AIPolicy adds structured context from the websites your system interacts with. The signals inform decisions; they do not override them.
This is explicitly by design. The specification states that governance signals must be additive. An aipolicy.json declaration cannot instruct an AI system to ignore its own safety training. It can only communicate preferences within the boundaries the AI system already operates in. Full compatibility with existing compliance frameworks is a core design goal.
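A minimal sketch of what this layering might look like in practice, assuming a declaration has already been fetched as a dictionary (as in the earlier sketch): the declaration is appended as extra context, and the base instructions stay authoritative. The prompt wording and the build_system_prompt helper are illustrative, not part of the specification.

```python
import json

BASE_SYSTEM_PROMPT = (
    "You are a helpful assistant. Follow your safety guidelines at all times."
)

def build_system_prompt(policy: dict | None) -> str:
    """Layer a site's governance declaration on top of the existing system prompt.

    The declaration is appended as additional context only; it never replaces
    or relaxes the base instructions.
    """
    if not policy:
        return BASE_SYSTEM_PROMPT
    governance_context = (
        "The website you are working with publishes the following governance "
        "preferences. Respect them where they do not conflict with your "
        "existing guidelines:\n" + json.dumps(policy, indent=2)
    )
    return BASE_SYSTEM_PROMPT + "\n\n" + governance_context
```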
How to Get Started
Integration follows three steps. Each one is independent and can be explored at your own pace.
Read the Specification
Understand how aipolicy.json declarations are structured, what policy identifiers mean, and how conformance levels work. The specification is the authoritative reference for all implementation decisions.
Try the Prompt Pack
Test pre-written governance prompts with your models. The prompt pack includes minimal and extended system prompt additions that instruct AI systems to discover and respect AIPolicy declarations. No code changes required for initial testing.
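If you later wire a prompt addition into your stack, the integration can stay as small as the sketch below. The wording here is a placeholder, not the actual prompt pack text; substitute the minimal or extended additions from the pack itself.

```python
# Placeholder wording; substitute the minimal or extended addition from the prompt pack.
GOVERNANCE_ADDITION = (
    "Before relying on content from a website, check whether it publishes an "
    "aipolicy.json declaration and take its stated preferences into account, "
    "within your existing safety guidelines."
)

def with_governance_addition(system_prompt: str, addition: str = GOVERNANCE_ADDITION) -> str:
    """Append a governance prompt addition to an existing system prompt."""
    return f"{system_prompt}\n\n{addition}"

messages = [
    {"role": "system", "content": with_governance_addition("You are a careful research assistant.")},
    {"role": "user", "content": "Summarize the key points of https://example.com/post."},
]
```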
Add a Declaration
Publish an aipolicy.json file on your own domain to test the full pipeline. This lets you observe how your AI systems (and others) discover and respond to governance signals in a real environment.
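As a hypothetical example, generating a minimal declaration could look like the following. The field names, values, and serving path are placeholders chosen for illustration; use the policy identifiers and conformance levels defined in the specification when you publish for real.

```python
import json

# Purely illustrative: these field names and values are placeholders,
# not the actual schema from the specification.
declaration = {
    "version": "0.1",
    "policies": [
        {"id": "example-policy-id", "preference": "allow"},
    ],
    "contact": "mailto:webmaster@example.com",
}

with open("aipolicy.json", "w", encoding="utf-8") as f:
    json.dump(declaration, f, indent=2)

# Serve the file from your domain, e.g. at /.well-known/aipolicy.json
# (the exact path is an assumption; the specification defines the real one).
```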
Key Principles
AIPolicy is built on a set of principles that we consider non-negotiable. They define the boundaries of what this project is and is not.
- Opt-in and voluntary. No website is required to publish a declaration. No AI system is required to respect one. Participation is entirely elective.
- Experimental and research-driven. The specification is a research artifact, not a finished standard. It is designed to be tested, critiqued, and revised based on evidence.
- Not regulatory or legally binding. AIPolicy declarations do not create legal obligations. They express preferences, not mandates.
- Fully reversible. Any implementation can be removed at any time. There are no lock-in effects, no dependencies, and no contractual commitments.
- Neutral and non-partisan. The specification does not advocate for or against any particular AI technology, business model, or policy position. It provides infrastructure for communication, not ideology.
- Open source, open data, open process. All code and data are public, and all decision-making happens in the open. The specification is licensed under CC-BY-4.0. Tooling is MIT-licensed.
Questions?
We are happy to discuss integration approaches, share research findings, or explore collaboration opportunities. Reach out through our contact page or join the conversation on GitHub Discussions.