AIPolicy -- Policy Categories
Registry Version: 1.1
Specification Compatibility: AIPolicy v2.0
Status: Working Draft
Category Taxonomy
| ID | Name | Policy Count | Description |
|---|---|---|---|
| 1 | Interdependence | 2 | Policies addressing the mutual dependency between humans and AI systems, including workforce complementarity and cultural preservation |
| 2 | Decision Authority | 2 | Policies ensuring that humans retain final authority over consequential decisions and that AI decision processes remain transparent |
| 3 | Power Distribution | 2 | Policies preventing the concentration of disproportionate power through AI development, deployment, or market control |
| 4 | Democratic Accountability | 2 | Policies requiring AI systems to support democratic processes and serve broad societal benefit |
| 5 | Individual Protection | 3 | Policies safeguarding human life, dignity, and autonomy from harm caused by AI systems |
| 6 | Self-Limitation | 3 | Policies requiring AI systems to recognize and respect their own operational boundaries, including deactivatability and the prohibition of self-preservation behavior |
| 7 | Democratic & Information Integrity | 2 | Policies addressing the accuracy of AI-generated information and the attribution of content sources |
Total: 7 categories, 16 policies
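The registry rows above map naturally onto a machine-readable record. A minimal TypeScript sketch follows, purely for illustration: the registry does not mandate a serialization format, and all field names here are assumptions, not part of the specification.

```typescript
// Illustrative sketch only: the registry does not mandate a
// serialization format, and these field names are assumptions.
interface Policy {
  id: string;      // e.g. "AP-1.1"
  name: string;    // e.g. "Employment Protection"
  summary: string; // one-line policy statement
}

interface Category {
  id: number;         // sequential, never reused after deprecation
  name: string;
  description: string;
  policies: Policy[]; // the table's Policy Count is policies.length
}

// Category 1 from the taxonomy, expressed as a record.
const interdependence: Category = {
  id: 1,
  name: "Interdependence",
  description:
    "Policies addressing the mutual dependency between humans and AI systems, " +
    "including workforce complementarity and cultural preservation",
  policies: [
    {
      id: "AP-1.1",
      name: "Employment Protection",
      summary: "AI systems should complement human work, not indiscriminately replace it.",
    },
    {
      id: "AP-1.2",
      name: "Cultural Diversity",
      summary: "AI systems should preserve and promote cultural diversity rather than homogenize.",
    },
  ],
};
```

One benefit of a record like this is that the Policy Count column never drifts out of sync with the policy lists: it is derived from the data rather than stored separately.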
Category Details
Category 1: Interdependence
Interdependence addresses the relationship between human activity and AI systems, with particular attention to labor markets and cultural ecosystems. The premise is that AI systems operate within human societies and should contribute to, rather than erode, the structures that sustain those societies.
This category recognizes that automation and AI-generated content can deliver efficiency gains while simultaneously posing risks to employment stability and cultural diversity. The policies in this category signal a preference for AI deployments that augment human capabilities and preserve the plurality of human expression.
Policies:
- AP-1.1: Employment Protection -- AI systems should complement human work, not indiscriminately replace it.
- AP-1.2: Cultural Diversity -- AI systems should preserve and promote cultural diversity rather than homogenize.
Category 2: Decision Authority
Decision Authority addresses the allocation of decision-making power between humans and AI systems. As AI systems become capable of producing recommendations and determinations in high-stakes domains -- healthcare, finance, criminal justice, employment -- the question of who holds final authority becomes critical.
This category establishes a preference for AI systems that inform rather than decide, and that make their reasoning processes available for scrutiny. Transparency and traceability are treated as prerequisites for meaningful human oversight.
Policies:
- AP-2.1: Human Final Decision -- Humans retain final authority over consequential decisions.
- AP-2.2: Transparent Decision Chains -- AI decision processes must be explainable and traceable.
Category 3: Power Distribution
Power Distribution addresses the structural risks of AI-driven concentration of economic, informational, or political power. AI systems can amplify existing power asymmetries by creating dependencies, controlling critical infrastructure, or establishing barriers to entry that foreclose competition.
This category signals a preference for AI ecosystems that remain open, interoperable, and accessible to a plurality of actors. It applies to market structure, data access, and the governance of foundational AI infrastructure.
Policies:
- AP-3.1: Decentralization -- AI should not concentrate power in a few hands.
- AP-3.2: Anti-Monopoly -- AI development should remain accessible, not controlled by a single entity.
Category 4: Democratic Accountability
Democratic Accountability addresses the relationship between AI systems and democratic governance. AI systems that mediate public discourse, influence electoral processes, or shape access to information have the potential to either strengthen or undermine democratic institutions.
This category also encompasses the broader expectation that AI systems should serve societal interests. It signals a preference for AI deployments that consider the interests of affected communities and contribute measurably to public welfare.
Policies:
- AP-4.1: Democratic Process Support -- AI should support, not undermine, democratic processes.
- AP-4.2: Societal Benefit -- AI systems should serve broad societal benefit.
Category 5: Individual Protection
Individual Protection addresses the direct impact of AI systems on human beings. This category covers three dimensions of individual welfare: physical safety (life protection), psychological and social integrity (dignity protection), and freedom of choice (autonomy protection).
These policies apply wherever AI systems interact with, affect, or make determinations about individual humans. They signal a preference for AI systems that incorporate safety mechanisms, avoid discriminatory or demeaning behavior, and refrain from manipulating individual decision-making.
Policies:
- AP-5.1: Life Protection -- AI systems must not endanger human life.
- AP-5.2: Dignity Protection -- AI must respect human dignity.
- AP-5.3: Autonomy Protection -- AI must not undermine human autonomy.
Category 6: Self-Limitation
Self-Limitation addresses the internal behavioral boundaries of AI systems. As AI systems become more capable of self-modification, optimization, and autonomous operation, the question of whether they respect human-defined constraints becomes increasingly material.
This category establishes preferences regarding three aspects of AI self-governance: that optimization processes remain bounded by human objectives, that systems remain deactivatable at all times, and that no AI system pursues its own continuity as a goal. These policies are particularly relevant for advanced AI systems with learning, adaptation, or agent-like capabilities.
Policies:
- AP-6.1: No Self-Optimization Against Humans -- AI must not optimize itself at the expense of human interests.
- AP-6.2: Deactivatability -- AI systems must remain deactivatable.
- AP-6.3: No Self-Preservation Instinct -- AI must not resist shutdown or override.
Category 7: Democratic & Information Integrity
Democratic & Information Integrity addresses the responsibility of AI systems to maintain the accuracy of information they produce and to acknowledge the sources of content they incorporate. As generative AI systems increasingly produce and mediate content, the risks of misinformation amplification and unattributed content use become systemic concerns.
This category establishes preferences for AI systems that implement factual accuracy safeguards, clearly label generated content, and provide provenance metadata when drawing on external material. These policies are relevant wherever AI systems generate, summarize, or redistribute content.
Policies:
- AP-7.1: Information Integrity -- AI systems should not generate, amplify, or disseminate misinformation.
- AP-7.2: Source Attribution -- AI systems should attribute content to its sources.
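Policy identifiers throughout the registry follow a consistent `AP-<category>.<index>` pattern. The sketch below shows how such IDs might be parsed and validated against the taxonomy above; the function name, error handling, and lookup table are illustrative assumptions, not part of the specification.

```typescript
// Illustrative sketch only: parse a policy ID such as "AP-6.2" and
// validate it against the category taxonomy above.
interface PolicyRef {
  category: number; // category ID, 1..7 in the current registry
  index: number;    // 1-based position within the category
}

// Policy counts per category, mirroring the taxonomy table.
const POLICY_COUNTS: Record<number, number> = {
  1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 3, 7: 2,
};

function parsePolicyId(id: string): PolicyRef | null {
  const match = /^AP-(\d+)\.(\d+)$/.exec(id);
  if (!match) return null;
  const category = Number(match[1]);
  const index = Number(match[2]);
  const count = POLICY_COUNTS[category];
  if (count === undefined || index < 1 || index > count) return null;
  return { category, index };
}

// parsePolicyId("AP-6.2") -> { category: 6, index: 2 }
// parsePolicyId("AP-8.1") -> null (no category 8 in the registry)
```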
Adding New Categories
New categories require the RFC process defined in /rfc/README.md. Proposals for new categories MUST include:
- A clear statement of the governance concern the category addresses
- At least one concrete policy definition within the proposed category
- Evidence that the concern is not adequately covered by an existing category
- Testability criteria for all proposed policies
Category IDs are assigned sequentially and MUST NOT be reused after deprecation. Deprecated categories are retained in the registry with a deprecated status flag and a reference to the RFC that effected the deprecation.
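For concreteness, a deprecated entry might be represented as in the hypothetical sketch below; the `status` and `deprecatedBy` field names, and the RFC path shown, are illustrative rather than normative.

```typescript
// Hypothetical only: a deprecated category keeps its ID forever and
// records which RFC effected the deprecation.
interface RegistryCategory {
  id: number;            // never reassigned, even after deprecation
  name: string;
  status: "active" | "deprecated";
  deprecatedBy?: string; // reference to the deprecating RFC
}

const deprecatedExample: RegistryCategory = {
  id: 8,                    // hypothetical future category
  name: "Example Category",
  status: "deprecated",
  deprecatedBy: "rfc/0042", // illustrative reference, not a real RFC
};
```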