Related Work and Prior Art

Status: Non-normative
Last Updated: 2026-02-07

This document is non-normative. It surveys existing technical standards, governance frameworks, and AI alignment approaches in relation to AIPolicy. The purpose is to situate AIPolicy within the broader landscape and to clarify its distinct contribution.


1. Technical Standards Comparison

The following entries compare AIPolicy to existing web standards and technical specifications that address aspects of AI interaction with web content.

robots.txt
  Purpose: Crawl access control
  Format: Plain text, served from the site root (/robots.txt)
  Governance semantics: None; binary allow/disallow rules
  Relationship to AIPolicy: Analogous deployment pattern (a fixed, publisher-controlled location); different purpose. robots.txt controls access; AIPolicy communicates behavioral expectations.

security.txt (RFC 9116)
  Purpose: Vulnerability disclosure contact information
  Format: Plain text, /.well-known/security.txt
  Governance semantics: None
  Relationship to AIPolicy: Analogous deployment pattern; both use /.well-known/ for publisher-controlled declarations. Entirely different domain.

llms.txt
  Purpose: Content guidance for large language models
  Format: Markdown, served from the site root (/llms.txt)
  Governance semantics: Minimal; describes content structure and preferred handling
  Relationship to AIPolicy: Complementary. AIPolicy Level 3 references llms.txt for content-level guidance. AIPolicy addresses governance policy; llms.txt addresses content description.

ai.txt (Spawning)
  Purpose: Training-data opt-out
  Format: Plain text, served from the site root (/ai.txt)
  Governance semantics: Binary consent (opt-in/opt-out per agent)
  Relationship to AIPolicy: AIPolicy provides granular, multi-dimensional policy signals rather than binary consent. ai.txt addresses data-usage permission; AIPolicy addresses behavioral governance.

Schema.org
  Purpose: Structured-data vocabulary for the web
  Format: JSON-LD, Microdata, RDFa
  Governance semantics: General vocabulary; no AI governance semantics
  Relationship to AIPolicy: AIPolicy uses Schema.org vocabulary for JSON-LD embedding (Level 2 transport). Schema.org provides the foundational linked-data infrastructure.

sitemap.xml
  Purpose: Content discovery and indexing hints
  Format: XML
  Governance semantics: None
  Relationship to AIPolicy: Entirely different purpose; no functional overlap.
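To make the Schema.org relationship concrete, a Level 2 embedding might carry an AIPolicy declaration as JSON-LD inside a page. This is an illustrative sketch only: the "aipolicy" property and the policy fields shown are hypothetical placeholders, not names defined by the AIPolicy specification or the Schema.org vocabulary.

```python
import json

# Hypothetical Level 2 embedding: an AIPolicy declaration carried as
# Schema.org JSON-LD in a page's <head>. The "aipolicy" property and
# the policy entries below are illustrative placeholders, not part of
# the AIPolicy or Schema.org vocabularies.
declaration = {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "url": "https://example.com/",
    "aipolicy": {
        "version": "1.0",
        "policies": [
            {"category": "transparency", "level": 2},
            {"category": "human-oversight", "level": 2},
        ],
    },
}

# Serialize into the script tag a publisher would embed in HTML.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(declaration, indent=2)
    + "\n</script>"
)
print(snippet)
```

Because the payload rides inside a standard JSON-LD block, existing structured-data tooling can extract it without any AIPolicy-specific parsing.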

Discussion

The well-known URI pattern (/.well-known/) established by RFC 8615 provides a proven mechanism for site-wide metadata. AIPolicy follows this convention by specifying /.well-known/aipolicy.json as the canonical discovery endpoint. The critical distinction is that while robots.txt, security.txt, and ai.txt communicate access permissions or contact information, AIPolicy communicates substantive governance expectations -- what an AI system should value, not merely what it may access.
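The discovery flow under this convention can be sketched in a few lines: resolve the canonical endpoint from a site origin, fetch it, and treat absence or malformed content as "no signal published". Only the /.well-known/aipolicy.json path comes from the specification; the example makes no assumption about the fields inside the returned document.

```python
import json
import urllib.error
import urllib.request

# Canonical discovery path per the AIPolicy specification.
WELL_KNOWN_PATH = "/.well-known/aipolicy.json"

def aipolicy_url(origin: str) -> str:
    """Build the canonical AIPolicy discovery URL for a site origin."""
    return origin.rstrip("/") + WELL_KNOWN_PATH

def fetch_aipolicy(origin: str):
    """Fetch and parse a site's AIPolicy declaration.

    Returns the parsed JSON document, or None when the site publishes
    no declaration (or serves something unparseable). Absence of a
    declaration is itself a valid state: no signal published.
    """
    try:
        with urllib.request.urlopen(aipolicy_url(origin), timeout=10) as resp:
            return json.load(resp)
    except (urllib.error.URLError, ValueError):
        return None
```

For the origin https://example.com this resolves https://example.com/.well-known/aipolicy.json, mirroring how RFC 9116 clients locate security.txt.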

The llms.txt proposal represents a step toward richer communication with AI systems but remains focused on content description rather than governance signals. AIPolicy and llms.txt are designed to coexist: a publisher may use llms.txt to describe content structure and AIPolicy to declare governance preferences.


2. AI Governance Frameworks

The following entries survey major international governance frameworks and their relationship to AIPolicy.

EU AI Act (Regulation (EU) 2024/1689)
  Type: Binding regulation
  Scope: Risk classification, provider and deployer obligations, prohibited practices
  Relationship to AIPolicy: AIPolicy registry categories overlap thematically with AI Act risk areas. AIPolicy does not assert compliance with the AI Act and is not a compliance tool. The AI Act operates through legal obligation; AIPolicy operates through voluntary signal publication.

UNESCO Recommendation on the Ethics of AI (2021)
  Type: International recommendation
  Scope: Values, principles, and policy areas for ethical AI
  Relationship to AIPolicy: AIPolicy registry categories were informed by themes present in the UNESCO Recommendation (e.g., human oversight, cultural diversity, transparency). AIPolicy does not certify alignment with UNESCO principles.

OECD Recommendation on AI (2019)
  Type: International principles
  Scope: Human-centered, trustworthy AI across the lifecycle
  Relationship to AIPolicy: Shared emphasis on human agency and transparency. AIPolicy operationalizes related themes as machine-readable web signals but does not claim to implement OECD principles.

NIST AI Risk Management Framework (AI RMF 1.0, 2023)
  Type: Risk management framework
  Scope: Risk identification, assessment, and mitigation for AI systems
  Relationship to AIPolicy: Complementary. The NIST AI RMF provides organizational risk governance; AIPolicy provides a web-native signal layer through which publishers express governance preferences.

Council of Europe Framework Convention on AI (2024)
  Type: International treaty
  Scope: Human rights, democracy, and rule of law in AI contexts
  Relationship to AIPolicy: Thematic overlap in areas such as human oversight and democratic accountability. AIPolicy is technical infrastructure, not a legal instrument.

Discussion

AIPolicy is not a governance framework. It is a technical mechanism through which governance preferences -- potentially informed by any of the frameworks above -- can be expressed in a machine-readable format and published on the web. The relationship is analogous to that between a building code (governance framework) and a building materials label (technical signal): the label communicates properties; the code establishes requirements.

The frameworks listed above operate at the level of legal obligation, international norm-setting, or organizational risk management. AIPolicy operates at the level of web infrastructure. A publisher may choose to publish AIPolicy signals that reflect values present in one or more of these frameworks, but the act of publication carries no legal weight and does not constitute compliance certification.


3. AI Alignment Approaches

The following entries survey technical approaches to AI alignment and their relationship to AIPolicy.

Constitutional AI (Anthropic)
  Type: Training-time alignment method using written principles
  Relationship to AIPolicy: Constitutional AI uses internally defined principles to guide model behavior via self-critique. AIPolicy provides an external, distributed source of governance signals published by website operators. The two approaches operate at different layers: Constitutional AI is model-internal; AIPolicy is ecosystem-external.

RLHF (Reinforcement Learning from Human Feedback)
  Type: Training-time alignment via curated human preference data
  Relationship to AIPolicy: AIPolicy signals could in principle appear in preference data or training corpora. Whether and how such signals influence model behavior remains an open research question (see hypothesis.md).

Model Cards (Mitchell et al., 2019)
  Type: Post-hoc documentation of model properties and limitations
  Relationship to AIPolicy: Complementary. Model cards describe models; AIPolicy declarations describe publisher governance preferences. They address different sides of the AI-publisher relationship.

Datasheets for Datasets (Gebru et al., 2021)
  Type: Documentation standard for training datasets
  Relationship to AIPolicy: Complementary. Datasheets document dataset provenance and composition; AIPolicy declarations are signals within web content. They operate at different layers of the data lifecycle.

Discussion

Current AI alignment approaches are predominantly model-centric: they operate within the training pipeline, controlled by model developers. AIPolicy proposes a complementary, publisher-centric approach in which governance signals originate from the broader web ecosystem. This is not a claim that AIPolicy is superior to existing alignment methods; rather, it occupies a different position in the alignment landscape -- one that is decentralized, voluntary, and observable from outside the model development process.

The hypothesis that web-published governance signals can influence model behavior during training is discussed in detail in hypothesis.md. This remains an open research question.


4. Key Differences

AIPolicy is distinct from the approaches surveyed above in the following respects:

  1. Web-native. AIPolicy is a web standard. It uses established web conventions (well-known URIs, JSON, JSON-LD, HTTP headers) and is discoverable through standard web mechanisms.

  2. Machine-readable. Unlike natural-language governance documents, AIPolicy declarations are structured data with a defined schema, enabling automated processing, aggregation, and analysis.

  3. Publisher-controlled. Governance signals originate from website publishers, not from model developers, regulators, or international bodies. This distributes the capacity to express governance preferences across the web.

  4. Aggregatable. Because declarations follow a common schema, they can be aggregated across publishers to produce statistical analyses of governance signal adoption -- enabling research into collective governance expression.

  5. Research-oriented infrastructure. AIPolicy is designed as infrastructure for studying how governance signals propagate through the web and potentially into AI systems. It is not a compliance tool, not a certification scheme, and not a legal instrument.

  6. Non-prescriptive. AIPolicy defines a signal format, not a set of required behaviors. Publishers choose which policies to adopt and at what level. The specification imposes no obligations on AI system developers.
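Point 4 above can be sketched in a few lines. Assuming each fetched declaration exposes a list of policy entries (the "policies" and "category" field names are hypothetical placeholders, not taken from the published schema), aggregation across publishers reduces to counting:

```python
from collections import Counter

def count_policy_categories(declarations):
    """Tally how often each policy category appears across a set of
    parsed AIPolicy declarations. Field names here are illustrative
    placeholders, not the specification's actual schema."""
    counts = Counter()
    for doc in declarations:
        for policy in doc.get("policies", []):
            counts[policy.get("category", "unknown")] += 1
    return counts

# Two hypothetical publisher declarations:
sample = [
    {"policies": [{"category": "human-oversight"},
                  {"category": "transparency"}]},
    {"policies": [{"category": "transparency"}]},
]
totals = count_policy_categories(sample)
# totals["transparency"] == 2; totals["human-oversight"] == 1
```

Because every declaration follows the same schema, this kind of census scales from two sample documents to a web-wide crawl without changing the counting logic.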


References

See references.md for the complete reference list.