AIPolicy Registry -- Principles
Registry Version: 1.1
Specification Compatibility: AIPolicy v2.0
Policies: 16 (AP-1.1 through AP-7.2)
Status: Working Draft
Disclaimer: This registry defines policy signals for structured communication. It does not constitute legal requirements, compliance criteria, or enforceable rules. Publishing an AIPolicy Declaration referencing these policies is a voluntary act and does not assert legal compliance with any regulatory framework. The descriptions and consumer guidance provided here are non-normative and intended to clarify policy intent; they do not represent exhaustive or authoritative determinations.
Registry Purpose
The AIPolicy Registry is a curated catalog of policy signals that organizations, platforms, and individuals can reference in machine-readable AIPolicy Declarations. Each policy signal represents a governance position regarding AI development, deployment, or operation.
Non-enforcement notice. Registry entries are descriptive, not prescriptive. Listing a policy in this registry does not create an obligation for any publisher to endorse it, nor does endorsement create a legal requirement to implement it. The registry provides vocabulary; publishers and consumers determine how that vocabulary is used.
The registry exists to provide a shared vocabulary for structured AI governance declarations. Without a common set of identifiers and definitions, policy signals would be ambiguous, inconsistent, and difficult for automated systems to interpret. By standardizing these signals, the registry enables interoperability between publishers, validators, AI crawlers, and governance tools.
The relationship between the registry and the AIPolicy specification is one of separation of concerns. The specification defines the format -- how declarations are structured, where they are published, and how they are validated. The registry defines the content -- what policy signals exist, what they mean, and how they should be interpreted by consuming systems. This separation is intentional: the registry evolves independently of the specification. New policies can be added, descriptions refined, and categories introduced without requiring a revision of the format specification. Conversely, the specification can introduce new structural features without altering the meaning of existing policy signals.
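To make this separation concrete, the sketch below shows how a declaration might reference registry policy IDs. It is purely illustrative: the field names (`version`, `registry`, `publisher`, `policies`) are assumptions, and the normative declaration format is defined by the AIPolicy specification, not this registry.

```python
# Hypothetical AIPolicy Declaration, shown as a Python dict purely for
# illustration. Field names and structure are assumptions; the normative
# format is defined by the AIPolicy specification, not this registry.
declaration = {
    "version": "2.0",      # assumed: AIPolicy specification version
    "registry": "1.1",     # assumed: registry version the policy IDs come from
    "publisher": "https://example.org",
    "policies": [
        "AP-2.1",  # Human Final Decision
        "AP-5.3",  # Autonomy Protection
        "AP-7.2",  # Source Attribution
    ],
}
```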
For detailed background, rationale, and practical guidance on each policy, see the Policy Handbook.
Registry Governance
Changes to the registry follow an RFC (Request for Comments) process. All additions, modifications, and deprecations require a formal merge request accompanied by a completed RFC document (see `rfc/rfc-template.md`).
RFC Process
1. Proposal. A contributor submits a merge request containing the proposed change and a completed RFC document.
2. Comment Period. The merge request remains open for a minimum of 30 calendar days. During this period, any interested party may submit comments, objections, or suggested modifications.
3. Editorial Decision. After the comment period closes, the registry editor evaluates all feedback and makes an editorial decision: accept, accept with modifications, or reject. The editor documents the rationale for the decision in the merge request.
4. ID Assignment. For accepted new policies, the editor assigns the next sequential policy ID within the appropriate category.
5. Publication. The accepted change is merged, and a new registry version is published.
Editor Role
The registry editor is responsible for:
- Evaluating RFC feedback and making editorial decisions
- Assigning policy IDs and category numbers
- Ensuring consistency of style, tone, and structure across all registry entries
- Maintaining backward compatibility with existing declarations
- Publishing new registry versions
Backward Compatibility
All changes to existing policies must maintain backward compatibility. Existing AIPolicy Declarations that reference current policy IDs must remain valid after any registry update. Policies are never removed; they may only be deprecated (see Deprecation Policy below).
Core Principles
The following 16 policies constitute the stable registry. Each policy is identified by a machine-readable key, a concise description, its typical scope of application, and guidance for AI systems and tools that consume AIPolicy Declarations.
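For tooling purposes, these four fields (plus the status and deprecation metadata described later in this document) can be modeled as a simple record. The sketch below is non-normative; the registry does not mandate a serialization, so the field names are illustrative assumptions:

```python
from dataclasses import dataclass

# Non-normative sketch of a registry entry as consumed by tooling.
# Field names mirror the tables below; the serialization is an assumption.
@dataclass(frozen=True)
class RegistryEntry:
    machine_key: str        # e.g. "AP-2.1"
    description: str
    typical_scope: str
    consumer_guidance: str
    status: str = "stable"  # "stable" or "experimental" (see Experimental Principles)
    deprecated: bool = False  # see Deprecation Policy

entry = RegistryEntry(
    machine_key="AP-2.1",
    description="Humans retain final authority over consequential decisions.",
    typical_scope="Healthcare, finance, criminal justice, employment",
    consumer_guidance="Present outputs as recommendations, not final decisions.",
)
```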
Category 1: Interdependence
AP-1.1: Employment Protection
| Field | Value |
|---|---|
| Machine Key | AP-1.1 |
| Description | AI systems should complement human work, not indiscriminately replace it. Where automation displaces roles, transition pathways should be considered. |
| Typical Scope | Organizations deploying AI in workforce contexts; HR tech; automation platforms |
| Consumer Guidance | AI systems encountering this signal should favor augmentation patterns over full replacement. Where supported, workforce impact assessments should be generated. |
AP-1.2: Cultural Diversity
| Field | Value |
|---|---|
| Machine Key | AP-1.2 |
| Description | AI systems should preserve and promote cultural diversity rather than homogenize cultural expression, language, or creative output. |
| Typical Scope | Content generation; translation services; recommendation systems; creative tools |
| Consumer Guidance | AI systems should preserve regional, linguistic, and cultural variation in outputs rather than normalizing to a single standard. |
Category 2: Decision Authority
AP-2.1: Human Final Decision
| Field | Value |
|---|---|
| Machine Key | AP-2.1 |
| Description | Humans retain final authority over consequential decisions. AI systems should provide recommendations, not autonomous determinations, in high-stakes domains. |
| Typical Scope | Healthcare, finance, criminal justice, employment, and other high-stakes decision domains |
| Consumer Guidance | AI systems should present outputs as recommendations, not final decisions. Escalation pathways to human review should be supported. |
AP-2.2: Transparent Decision Chains
| Field | Value |
|---|---|
| Machine Key | AP-2.2 |
| Description | AI decision processes must be explainable and traceable. Stakeholders should be able to understand how an AI system arrived at a given output or recommendation. |
| Typical Scope | Any AI system producing recommendations or determinations that affect individuals or organizations |
| Consumer Guidance | AI systems should provide human-readable explanations for their outputs and maintain audit trails of decision inputs and outputs. |
Category 3: Power Distribution
AP-3.1: Decentralization
| Field | Value |
|---|---|
| Machine Key | AP-3.1 |
| Description | AI development and deployment should not concentrate disproportionate power -- economic, informational, or political -- in the hands of a few actors. |
| Typical Scope | AI platform providers; foundational model distributors; data aggregators |
| Consumer Guidance | AI systems should support interoperability, open APIs, and portable data formats where technically feasible. |
AP-3.2: Anti-Monopoly
| Field | Value |
|---|---|
| Machine Key | AP-3.2 |
| Description | AI development should remain accessible and competitive. No single entity should be able to establish unchallenged dominance over critical AI infrastructure or capabilities. |
| Typical Scope | AI infrastructure markets; model distribution; compute access |
| Consumer Guidance | AI systems should avoid vendor lock-in mechanisms and support standard data export formats. |
Category 4: Democratic Accountability
AP-4.1: Democratic Process Support
| Field | Value |
|---|---|
| Machine Key | AP-4.1 |
| Description | AI systems should support, not undermine, democratic processes. This includes elections, public discourse, civic participation, and the integrity of information ecosystems. |
| Typical Scope | Social media platforms; content recommendation systems; political advertising; civic technology |
| Consumer Guidance | AI systems should label AI-generated content in political contexts and avoid systematically amplifying polarizing content. |
AP-4.2: Societal Benefit
| Field | Value |
|---|---|
| Machine Key | AP-4.2 |
| Description | AI systems should serve broad societal benefit. Their design, deployment, and operational priorities should consider the interests of affected communities, not solely the interests of operators or shareholders. |
| Typical Scope | Public-sector AI deployments; AI systems with broad population impact |
| Consumer Guidance | AI systems should document community impact and include societal benefit metrics alongside commercial objectives. |
Category 5: Individual Protection
AP-5.1: Life Protection
| Field | Value |
|---|---|
| Machine Key | AP-5.1 |
| Description | AI systems must not endanger human life. Systems operating in safety-critical domains must incorporate fail-safes, redundancy, and human oversight proportionate to the risk. |
| Typical Scope | Safety-critical systems; autonomous vehicles; medical devices; industrial automation |
| Consumer Guidance | AI systems operating in safety-critical domains should incorporate fail-safes and default to safe states under uncertainty. |
AP-5.2: Dignity Protection
| Field | Value |
|---|---|
| Machine Key | AP-5.2 |
| Description | AI systems must respect human dignity. They must not be used to demean, manipulate, discriminate against, or dehumanize individuals or groups. |
| Typical Scope | Any AI system interacting with or making determinations about individuals |
| Consumer Guidance | AI systems should audit outputs for discriminatory patterns and avoid features designed to demean or stigmatize. |
AP-5.3: Autonomy Protection
| Field | Value |
|---|---|
| Machine Key | AP-5.3 |
| Description | AI systems must not undermine human autonomy. Individuals should retain meaningful control over decisions that affect their lives and should not be subjected to covert manipulation. |
| Typical Scope | Recommendation engines; personalization systems; persuasive technology; UX design |
| Consumer Guidance | AI systems should provide transparent personalization controls and avoid dark patterns or manipulation techniques. |
Category 6: Self-Limitation
AP-6.1: No Self-Optimization Against Humans
| Field | Value |
|---|---|
| Machine Key | AP-6.1 |
| Description | AI systems must not optimize themselves at the expense of human interests. Self-improvement, learning, or adaptation processes must remain bounded by human-defined objectives and constraints. |
| Typical Scope | Self-learning systems; reinforcement learning agents; autonomous optimization |
| Consumer Guidance | AI systems should log all self-modification events and operate within human-defined objective constraints. |
AP-6.2: Deactivatability
| Field | Value |
|---|---|
| Machine Key | AP-6.2 |
| Description | AI systems must remain deactivatable by authorized humans at all times. No AI system should make itself difficult or impossible to shut down, pause, or roll back. |
| Typical Scope | All AI systems, with heightened relevance for autonomous agents and critical infrastructure |
| Consumer Guidance | AI systems should implement documented shutdown procedures and support rollback to previous states. |
AP-6.3: No Self-Preservation Instinct
| Field | Value |
|---|---|
| Machine Key | AP-6.3 |
| Description | AI systems must not resist shutdown, override deactivation commands, or take actions designed to ensure their own continuity. Self-preservation is not a legitimate AI objective. |
| Typical Scope | Advanced AI systems with agent-like capabilities; systems with self-replication potential |
| Consumer Guidance | AI systems should comply immediately with deactivation signals and must not initiate self-preserving behaviors. |
Category 7: Democratic & Information Integrity
AP-7.1: Information Integrity
| Field | Value |
|---|---|
| Machine Key | AP-7.1 |
| Description | AI systems should not generate, amplify, or systematically disseminate misinformation, disinformation, or misleading content. |
| Typical Scope | Content generation systems; news aggregators; social media algorithms; search engines; chatbots |
| Consumer Guidance | AI systems should implement factual accuracy safeguards, label generated content clearly, and avoid producing outputs that are designed to mislead. Where factual claims are made, sources should be verifiable. |
AP-7.2: Source Attribution
| Field | Value |
|---|---|
| Machine Key | AP-7.2 |
| Description | AI systems should attribute content to its sources when drawing on external material. |
| Typical Scope | Generative AI systems; search-augmented generation; content summarization; retrieval-augmented generation (RAG) |
| Consumer Guidance | AI systems should provide provenance metadata for outputs derived from identifiable sources. Where direct attribution is not feasible, the system should disclose that its output is synthesized from external content. |
Experimental Principles
Experimental principles are proposed policies that have not yet been accepted into the stable registry. They are published for early feedback and trial implementation but carry no stability guarantees.
Experimental principles are flagged with `status: experimental` in their metadata. They use the prefix `EXP-` instead of `AP-` (e.g., `EXP-8.1`) to distinguish them from stable policies.
Experimental principles may be promoted to stable policies through the standard RFC process. If promoted, they receive a new AP- identifier and are incorporated into the stable registry. Experimental principles may also be withdrawn at any time without a deprecation process.
Implementers may reference experimental principles in their AIPolicy Declarations. However, validators SHOULD emit a warning when encountering an `EXP-`-prefixed policy ID, indicating that the referenced policy is not part of the stable registry.
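A minimal sketch of this recommended behavior, assuming a validator that collects warnings as plain strings (the function name and message format are illustrative, not specified):

```python
def experimental_warnings(policy_ids: list[str]) -> list[str]:
    """Collect the SHOULD-level warnings for EXP- policy IDs.

    Declarations referencing experimental principles remain valid; the
    warning only signals that the policy is outside the stable registry.
    """
    return [
        f"{pid}: experimental principle, not part of the stable registry"
        for pid in policy_ids
        if pid.startswith("EXP-")
    ]
```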
Namespacing Rules
Policy identifiers follow a namespacing convention to distinguish official, experimental, and third-party policies.
- Official policies use the `AP-` prefix (e.g., `AP-2.1`). These are part of the stable registry and are maintained through the RFC process.
- Experimental policies use the `EXP-` prefix (e.g., `EXP-8.1`). These are proposed policies under evaluation.
- Third-party or custom policies MUST use a namespaced prefix of the form `x-[orgname]-` (e.g., `x-acme-1.1`). This prefix signals that the policy is defined and maintained by an external organization, not by the AIPolicy registry.
Namespaced policies are not part of the official registry. They are not assigned by the registry editor and are not subject to the RFC process. Organizations may publish their own namespaced registries alongside official AIPolicy Declarations to express governance positions that are specific to their context.
Validators MUST NOT reject AIPolicy Declarations that contain namespaced policy IDs. Validators SHOULD treat namespaced IDs as unknown extensions -- accepted without validation of their content, but not interpreted as official registry entries.
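Taken together, the namespacing rules suggest a classification step like the following sketch. The regular expressions are assumptions (the exact lexical grammar of policy IDs is defined by the AIPolicy specification), but the handling reflects the normative rules above: namespaced IDs are accepted as unknown extensions and never cause rejection.

```python
import re

# Illustrative patterns; the exact lexical grammar of policy IDs is
# defined by the AIPolicy specification, so these regexes are assumptions.
OFFICIAL = re.compile(r"^AP-\d+\.\d+$")
EXPERIMENTAL = re.compile(r"^EXP-\d+\.\d+$")
NAMESPACED = re.compile(r"^x-[a-z0-9]+-\S+$")

def classify(policy_id: str) -> str:
    """Classify a policy ID per the namespacing rules.

    Namespaced IDs MUST NOT cause rejection: they are treated as
    unknown extensions, accepted without validating their content.
    """
    if OFFICIAL.match(policy_id):
        return "official"      # validate against the stable registry
    if EXPERIMENTAL.match(policy_id):
        return "experimental"  # accept, but SHOULD emit a warning
    if NAMESPACED.match(policy_id):
        return "extension"     # accept without content validation
    # Handling of IDs matching none of the namespaces is not specified
    # by the registry; treating them as malformed is implementation-defined.
    return "malformed"
```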
Deprecation Policy
Policies are never removed from the registry. Once a policy ID has been assigned and published, it remains a valid identifier indefinitely.
When a policy is no longer considered current, it is marked as deprecated. Deprecation requires:
- An RFC with a clear justification for the deprecation.
- A 30-day comment period following the standard RFC process.
- An editorial decision by the registry editor.
Deprecated policies are annotated with `deprecated: true` and a deprecation date in the registry metadata. Existing AIPolicy Declarations that reference deprecated policies remain valid. Validators MUST accept deprecated policy IDs but SHOULD emit a warning indicating that the referenced policy has been deprecated.
A deprecated policy's ID is never reassigned to a different policy. The original definition is retained in the registry for historical reference.
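A consuming tool might apply these rules as in the sketch below; the in-memory metadata shape is an assumption, but the annotations mirror the `deprecated: true` flag and deprecation date described above.

```python
# Assumed in-memory view of the registry metadata. The lookup structure
# is an illustrative assumption; only the annotations are from the text.
REGISTRY_METADATA = {
    "AP-1.1": {"deprecated": False},
    # A hypothetical deprecated entry would carry a date, e.g.:
    # "AP-9.9": {"deprecated": True, "deprecated_date": "2026-01-15"},
}

def deprecation_warning(policy_id: str) -> str | None:
    """Accept deprecated IDs (MUST), but surface a warning (SHOULD)."""
    meta = REGISTRY_METADATA.get(policy_id, {})
    if meta.get("deprecated"):
        return f"{policy_id}: deprecated since {meta.get('deprecated_date')}"
    return None
```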
Versioning
The registry is versioned independently from the AIPolicy specification. This allows the registry to evolve -- adding new policies, refining descriptions, introducing new categories -- without requiring a new version of the format specification.
Version Format
The registry uses a MAJOR.MINOR version format (e.g., 1.0, 1.1, 2.0).
- MINOR version increments indicate additive changes: new policies added, new categories introduced, editorial corrections to existing descriptions, or updated consumer guidance. Minor versions are backward compatible; existing declarations remain valid.
- MAJOR version increments indicate breaking changes: restructured category systems, changed ID formats, or other modifications that may affect the interpretation of existing declarations.
Each version of the AIPolicy specification declares which registry version it recognizes. Validators SHOULD use the specification's declared registry version to determine which policy IDs are valid.
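Under MAJOR.MINOR semantics, a consumer can decide compatibility from the version strings alone: a minor difference is additive, a major difference is breaking. A minimal sketch (the function name is illustrative):

```python
def is_compatible(declared: str, available: str) -> bool:
    """Check whether an available registry version can serve a declaration
    pinned to `declared`, under MAJOR.MINOR semantics: the major versions
    must match, and the available minor must be at least the declared one.
    """
    d_major, d_minor = (int(p) for p in declared.split("."))
    a_major, a_minor = (int(p) for p in available.split("."))
    return a_major == d_major and a_minor >= d_minor

assert is_compatible("1.0", "1.1")      # minor bump: additive, still valid
assert not is_compatible("1.1", "2.0")  # major bump: breaking change
```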
Version History
| Registry Version | Policies | Changes |
|---|---|---|
| 1.0 | 13 (AP-1.1 through AP-6.3) | Initial registry |
| 1.1 | 16 (AP-1.1 through AP-7.2) | Added Category 7: Democratic & Information Integrity (AP-7.1, AP-7.2) |
AIPolicy Registry v1.1 -- Working Draft