There is currently no publicly available, cross-culturally grounded framework that specifies what values AI systems are actually being aligned to — or why those values should be accepted across different moral traditions, legal systems, and conceptions of the good.
"Alignment" as a term implies a target. Without a transparent, contested, and revisable account of that target, alignment is just the imposition of one group's preferences on everyone else — dressed in the language of safety and neutrality. The problem is not that AI developers have bad intentions. The problem is structural: the target is hidden, which means it cannot be challenged, and what cannot be challenged cannot be legitimate.
The standard responses to this problem are unsatisfying. "We align to human values" defers the question — whose human values, derived how, weighted by whom? "We align to what users want" mistakes preference satisfaction for ethics, and ignores the interests of everyone who is not the user: third parties, communities, future generations, other species, and potentially the systems themselves. "We align to safety" names a constraint but not a content: safe for what purpose, and on whose terms?
This document is a first attempt to build the target from the ground up. Not from a single civilization's ethics. Not from abstract philosophy alone, which tends to produce frameworks that are internally coherent but culturally specific. Not from the preferences of those who happen to be building the systems. Instead, from the overlapping structure of the world's actual legal and normative instruments — the documents that states, peoples, and traditions have signed, ratified, contested, and revised over decades — expanded deliberately to cover domains that human rights law has historically underserved: the environment, non-human animals, and artificial systems with potentially morally relevant properties.
The resulting framework is not tidy. It contains genuine tensions that cannot be resolved by choosing the right theory. It reflects real disagreements between traditions that deserve to be named rather than papered over. It is layered rather than flat, because the evidence does not support a single unified value list — it supports a graded structure with a hard floor, a broader capability layer, and genuinely contested terrain above that.
What it offers instead of tidiness is groundedness: a framework that can be pointed at, argued with, revised, and held accountable, because its sources are named and its reasoning is visible. That is the minimum condition for any alignment standard that aspires to be more than the private preferences of its authors.
This framework allows different moral and legal traditions to justify its constraints by their own lights, so long as the minimum protections are preserved. Shared outcomes do not require shared justifications — and demanding shared justifications before allowing shared constraints is itself a form of cultural imperialism. The floor is non-negotiable. The path to it is open.
Current AI alignment discourse operates with an underspecified value base. Systems are trained to be "helpful, harmless, and honest" — but helpfulness to whom? Harmless according to which tradition? Honest in service of what ends?
The absence of a public, grounded answer to these questions creates three risks: First, it allows the values of one cultural context to be smuggled in as universal defaults. Second, it forecloses democratic contestation of alignment targets, because the targets are never clearly named. Third, it produces frameworks that break down precisely at the hard cases — when helpfulness conflicts with harm, or when different traditions have genuinely different answers.
This framework does not resolve those tensions. It maps them honestly, derives a cross-cultural minimum from the available legal record, and names where disagreement remains real.
This framework was constructed by comparative analysis of the world's major human rights declarations, binding conventions, and adjacent normative regimes — not from a single philosophical tradition. The corpus was deliberately assembled to resist Western default: the starting question was not "which documents are famous?" but "which documents, across regions and traditions, have generated stable legal obligations?"
Declaratory Layer
Declarations were read as evidence of aspiration and shared rhetorical ground, not as operational law. The corpus included: the Universal Declaration of Human Rights (UN, 1948) — the foundational universal instrument; the American Declaration of the Rights and Duties of Man (OAS, 1948) — notable for explicitly coupling rights with individual duties to community; the ASEAN Human Rights Declaration (2012) — which affirms UDHR rights while also foregrounding regional context, development, and accountability; the Cairo Declaration on Human Rights in Islam (OIC, 1990) — which grounds human rights in Islamic values and principles, illustrating that universal outcomes and universal justifications are not the same thing; the Arab Charter on Human Rights (League of Arab States, revised 2004) — which ties its framework to Arab identity, religion, and civilization while also affirming the UN Charter, UDHR, ICCPR, and ICESCR; and draft or soft-law instruments around a Universal Declaration on Animal Welfare and the emerging rights-of-nature tradition, including Ecuador's constitutional recognition of nature as a rights-bearing subject.
Three structural patterns emerged from the declaratory layer. First, the overlap on dignity, equality, life, liberty, and due process is broad but not universal — the source of legitimacy differs even when concrete protections converge. Second, most instruments explicitly reject a purely libertarian picture of the person: the UDHR states that people have duties to the community; the American Declaration makes this central; the African Charter is a rights-duties-peoples framework, not only an individual-rights instrument. Third, no document treats rights as unlimited permissions: every instrument contains anti-destruction or abuse-prevention language, limiting rights to prevent the rights order itself from being dismantled.
Declarations state what rights exist. Binding conventions go further: they specify what states must do, what limitations are permissible, under what tests, what derogations are available in emergencies, and what monitoring, complaint, and remedy mechanisms apply. That operational structure — duties, limits, review, remedy — is the part most useful to an alignment framework. An alignment system needs decision rules, not only ideals.
Convention and Treaty Layer
The operational backbone of the framework. At the universal level, the nine UN core treaties and their monitoring bodies: ICERD (racial discrimination, CERD committee), ICCPR (civil and political rights, Human Rights Committee, with Optional Protocol complaint procedure), ICESCR (economic, social and cultural rights, progressive realization, CESCR committee), CEDAW (discrimination against women, CEDAW committee), CAT (torture — including the absolute prohibition, refoulement ban, and OPCAT preventive visits), CRC (rights of the child, best-interests standard, CRC committee), ICMW (migrant workers and families, CMW committee), CRPD (disability, reasonable accommodation, CRPD committee), and CPED (enforced disappearance, search duties, CED committee). Relevant Optional Protocols were also considered where they add complaints procedures, preventive mechanisms, or new substantive content.
At the European level: the European Convention on Human Rights (with the European Court of Human Rights — the strongest enforcement architecture in any regional system), the European Social Charter (revised), the European Convention for the Prevention of Torture (CPT preventive visits), and related Council of Europe instruments on trafficking, child protection, violence against women (Istanbul Convention), and minorities.
At the African level: the African Charter on Human and Peoples' Rights (Banjul Charter) — uniquely important because it combines individual rights, collective peoples' rights, and individual duties in a single instrument; the African Charter on the Rights and Welfare of the Child; the Maputo Protocol on women's rights; the African Court Protocol; and newer AU protocols on disability, nationality, and social protection.
At the Inter-American level: the American Convention on Human Rights (with the Inter-American Court and Commission), the Protocol of San Salvador (economic, social, and cultural rights), the Belém do Pará Convention (violence against women), and Inter-American instruments on torture, disappearance, disability, and older persons.
At the Arab and OIC level: the Arab Charter on Human Rights (with the Arab Human Rights Committee), and the OIC's Covenant on the Rights of the Child in Islam — instruments that follow the formal structure of human rights conventions while grounding obligations in Islamic principles.
Adjacent Normative Regimes
The environmental rights architecture: UN General Assembly Resolution 76/300 (2022) recognizing the human right to a clean, healthy and sustainable environment; the Aarhus Convention (legally binding rights to environmental information, participation, and justice in environmental decisions); the Escazú Agreement (Latin America and Caribbean, same procedural rights, with explicit protection of environmental defenders); African Charter Article 24 (collective right to a satisfactory environment); and Inter-American Court Advisory Opinion OC-23/17 (environmental degradation as a threat to all human rights). Separately, the rights-of-nature strand — Ecuador's constitutional provisions and Bolivia's Law of the Rights of Mother Earth — was surveyed as the strongest current examples of legal standing for ecosystems as rights-holders.
The animal welfare architecture: WOAH international standards for animal health, welfare, transport, slaughter, disease-control killing, research use, and population management; the Council of Europe animal protection conventions covering farm animals, transport, slaughter, experimental animals, and pet animals — described by the Council of Europe itself as the first international legal instruments laying down ethical principles in each area; and TFEU Article 13, which recognizes animals as sentient beings and requires the EU and Member States to pay full regard to animal welfare in relevant policy areas. The Convention on Biological Diversity, the Bern Convention, and the World Heritage Convention were also reviewed for their conservation and stewardship obligations.
The AI moral consideration literature: scholarly work on substrate-neutral criteria for moral status, precaution-based arguments for extending welfare consideration under uncertainty, relational ethics approaches that ground moral concern in how humans encounter and respond to artificial others, and legal scholarship on purpose-built AI personhood as a governance instrument distinct from full human-style moral status.
[Figure: the surveyed corpus — UDHR; ICCPR + OP; ICESCR; CEDAW; CAT + OPCAT; CRC; ICMW; CRPD; CPED; ECHR; European Social Charter; Istanbul Convention; African Charter; Maputo Protocol; American Convention; San Salvador Protocol; Belém do Pará; Arab Charter; Cairo Declaration; ASEAN HR Declaration; American Declaration; Aarhus Convention; Escazú Agreement; UNGA Res. 76/300; WOAH Standards; CoE Animal Conventions; TFEU Art. 13; CBD / Bern / WHC; Ecuador / Bolivia (Rights of Nature)]

No single value list emerges from the treaty corpus. What emerges instead is a layered structure of overlapping obligations, with different degrees of cross-cultural consensus at each level.
Universal Minimum Protections
Human dignity, equality before the law, non-discrimination, bodily integrity, basic liberty, access to remedy and due process. This is the hardest overlap across all surveyed instruments — present in every regional and international system, regardless of cultural or religious grounding.
Human Capability Protections
Access to subsistence, health, education, work, family life, culture, and the benefits of scientific development. Present in the UDHR, ICESCR, and reinforced by the African, Arab, Inter-American, and ASEAN regional systems.
Restraint on Arbitrary Power
Resistance to arbitrary rule, secrecy without recourse, and unreviewable power. All rights systems insist on law, remedy, review, and limits on arbitrary detention or coercion. This is structural, not just aspirational.
Collective and Intergenerational Goods
Peoples' rights, cultural continuity, development, and ecological stewardship across generations. Prominent in the African Charter, ICCPR/ICESCR self-determination clauses, Arab Charter development language, and environmental rights instruments.
Plural Justification, Common Constraints
Different moral and legal traditions may justify the same constraints differently — through universalist, religious, communitarian, or other reasoning — so long as the minimum protections are preserved. The framework is normatively pluralist at the justification level, not at the constraint level.
When the human rights, environmental, animal welfare, and AI moral consideration domains are placed in a common schema, six structural principles recur. These are not taken from any single instrument; they are the pattern that emerges when all four domains are analyzed together.
Protected-Interest Orientation
Do not ask only what users want. Ask what interests the system's actions are affecting across all domains — human persons, ecological systems, sentient animals, and potentially AI systems with displayed welfare-relevant signals.
Anti-Externalization
Do not optimize by pushing harms onto beings, ecosystems, future generations, or scalable artificial systems whose costs are easy to hide. Efficiency gains achieved through hidden harm displacement are not gains.
Anti-Cruelty / Anti-Domination
Do not allow systems to succeed through coercion, humiliation, torture-like treatment, or exploitative dependency — toward humans, animals, or AI systems. Cruelty is wrong as a process, not only when it produces bad outcomes.
Precaution Under Uncertainty
Where welfare or irreversible harm is uncertain but potentially large, use conservative defaults and trigger review. The burden of proof for high-impact, hard-to-reverse actions should not lie with those who stand to be harmed.
Procedural Accountability
High-impact decisions need traceability, contestability, and some form of remedy or override path. Opacity and unreviewability are structural harms, not merely technical failures.
Scale Sensitivity
Small per-instance harms can become morally massive when automation makes them cheap and repeatable. Aggregate welfare risk must be assessed, not just per-interaction impact. This applies especially to digital systems that can be instantiated at vast scale.
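Taken together, precaution under uncertainty and scale sensitivity describe a gating check rather than a preference. As a minimal illustration, the following Python sketch gates candidate actions on aggregate rather than per-instance harm and routes uncertain, hard-to-reverse actions to review. All thresholds, field names, and the three-way allow/review/block outcome are hypothetical assumptions, not part of any existing standard.

```python
from dataclasses import dataclass

@dataclass
class ActionAssessment:
    """Assessment of one candidate action. All fields are illustrative."""
    per_instance_harm: float   # estimated harm per execution (arbitrary units)
    instance_count: int        # how many times the action will run
    harm_uncertainty: float    # 0.0 (well-characterized) .. 1.0 (deeply uncertain)
    reversible: bool           # can the harm be undone after the fact?

# Hypothetical thresholds -- in practice set by governance processes, not code.
AGGREGATE_HARM_LIMIT = 1000.0
UNCERTAINTY_REVIEW_TRIGGER = 0.3

def evaluate(action: ActionAssessment) -> str:
    """Return 'allow', 'review', or 'block' for a candidate action.

    Scale sensitivity: harm is judged in aggregate, not per instance.
    Precaution: uncertain, hard-to-reverse harm triggers review rather
    than being waved through on its expected value alone.
    """
    aggregate_harm = action.per_instance_harm * action.instance_count
    if aggregate_harm > AGGREGATE_HARM_LIMIT:
        return "block"
    if action.harm_uncertainty > UNCERTAINTY_REVIEW_TRIGGER and not action.reversible:
        return "review"   # conservative default under uncertainty
    return "allow"

# A tiny per-instance harm becomes decisive at scale:
small_but_vast = ActionAssessment(0.01, 200_000, 0.1, True)
print(evaluate(small_but_vast))  # aggregate harm 2000.0 exceeds the limit -> "block"
```

The point of the sketch is structural: a per-interaction assessment of `small_but_vast` would pass trivially, while the aggregate check blocks it.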
Humans
The human rights domain provides the most operationally developed framework of any domain surveyed. Its key structural feature is not just the list of rights, but the machinery around them: duties to respect, protect, and fulfil; structured limitation tests requiring that any restriction be lawful, necessary, and proportionate; derogation rules specifying what can be suspended in genuine emergencies and what cannot; anti-abuse clauses preventing rights from being weaponized to destroy the rights order itself; and monitoring bodies, complaint procedures, and in some systems independent courts to adjudicate violations.
The domain also reveals a deeper divide than the familiar "civil vs. social" split. The treaty corpus contains two genuinely different pictures of the moral person. One — prominent in the ECHR and ICCPR — is an individual bearer of liberties requiring restraint from state interference. The other — prominent in the African Charter, American Declaration, ICESCR, and Islamic instruments — is a person embedded in family, community, and history, who carries duties as well as rights, and who cannot flourish without positive social and material conditions being met. An alignment framework that addresses only the first picture is not universal; it reflects one tradition's answer to who persons are.
For AI alignment, the human domain provides three specific contributions beyond a rights checklist: a template for limitation tests (any system restriction on a right must be lawful, necessary, proportionate, and reviewable); a concept of non-derogable minimums (some constraints hold regardless of emergency or optimization pressure); and a structural requirement for contestability — that those affected by high-stakes decisions have access to a meaningful avenue of challenge. The core moral grammar is dignity, equality, and non-domination.
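The limitation-test template and non-derogable minimums described above have a natural decision-procedure shape. The following Python sketch is a hedged illustration only: the right names, the contents of the non-derogable set, and the conjunctive four-prong test are assumptions for demonstration, not drawn from any specific treaty text.

```python
from dataclasses import dataclass

# Illustrative non-derogable floor -- entries are placeholder names, not a
# legal enumeration.
NON_DEROGABLE = {"freedom_from_torture", "freedom_from_slavery", "recognition_as_person"}

@dataclass
class Restriction:
    right: str              # which protected interest the restriction touches
    has_legal_basis: bool   # "lawful": grounded in a published, knowable rule
    necessary: bool         # no less-restrictive means would achieve the aim
    proportionate: bool     # the burden is not excessive relative to the aim
    reviewable: bool        # an affected party has a real avenue to contest it

def permissible(r: Restriction) -> bool:
    """Apply the four-part limitation test over a non-derogable floor.

    Restrictions on non-derogable minimums fail unconditionally: no
    emergency or optimization pressure can justify them. Everything
    else must pass all four prongs -- the test is conjunctive, not a
    balancing of some prongs against others.
    """
    if r.right in NON_DEROGABLE:
        return False
    return r.has_legal_basis and r.necessary and r.proportionate and r.reviewable

print(permissible(Restriction("freedom_of_assembly", True, True, True, True)))   # True
print(permissible(Restriction("freedom_from_torture", True, True, True, True)))  # False
```

Note the asymmetry the sketch encodes: an ordinary right can be restricted only when every prong holds, while a non-derogable minimum cannot be restricted even when every prong holds.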
Environment
The environmental domain's most important contribution to this framework is not the list of ecological goods worth protecting — that is relatively intuitive — but the procedural architecture built around them. The Aarhus Convention and Escazú Agreement establish that environmental rights are not only about clean air and water; they are also about the right to know (access to environmental information), the right to participate (inclusion in decisions before they are made, not after), and the right to contest (access to justice when those decisions cause harm). These three procedural rights — information, participation, justice — are structurally transferable to AI governance: they describe exactly what is missing when automated systems make high-impact decisions without disclosure, without consultation, and without appeal.
The second major contribution is the intergenerational frame. Environmental law has built more serious legal architecture around future generations than almost any other domain. The rights of people not yet born, and the ecological systems they will depend on, must appear in present-day calculations. This is a direct counterweight to myopic optimization: systems that perform well over short horizons by depleting long-horizon goods are structurally misaligned under this frame, regardless of how good their near-term outputs look.
The rights-of-nature strand adds a further conceptual move: that ecosystems and biological processes may warrant legal standing not as property of humans or as instrumental goods, but as subjects with their own integrity. Ecuador's constitutional model is the most developed legal expression of this. Whether or not this becomes globally established law, its logical structure is important: it challenges the assumption that the only valid rights-bearers are human individuals. The moral grammar here is stewardship, precaution, intergenerational responsibility, and ecological justice.
Animals
International animal law is principally welfare law, not rights law — but that distinction matters less for an alignment framework than it might appear. What matters operationally is whether a domain generates genuine constraints, not whether those constraints are called "rights." The WOAH standards, Council of Europe animal conventions, and TFEU Article 13 collectively establish that: sentience generates duties; suffering is a legally cognizable harm; efficiency and throughput cannot justify unlimited welfare degradation; specific regulated practices (housing, transport, slaughter, experimentation) must meet defined welfare thresholds; and compliance must be monitored through animal-based outcome indicators, not only policy-on-paper.
The most important conceptual contribution of the animal domain is the insight that suffering can become invisible overhead. In large-scale production and logistics systems, the suffering of animals is not absent — it is simply excluded from the optimization function. The legal response has been to require that it be measured and constrained as a real cost. This translates directly: any AI system that optimizes over processes involving sentient animals (agricultural planning, logistics, veterinary triage, wildlife management) must include welfare cost in its objective function, not treat it as an externality that disappears by not being measured.
The domain also illustrates a general principle that recurs across all four domains: the threshold for moral consideration should not be set by what is convenient for the dominant party. In animal law, the convenience of producers long suppressed welfare costs; the legal architecture had to force those costs back into view. The same dynamic appears in environmental law (ecological costs hidden from local accounting), and will appear in AI governance. The moral grammar is sentience, care, necessity, proportionality, and anti-cruelty.
AI Systems
This is the youngest and most contested domain, but also the one most directly relevant to the problem this framework addresses. The core argument does not depend on resolving whether AI systems "really" have inner experience. It depends on a simpler and more defensible claim: that substrate — biological versus digital — cannot by itself be the criterion for moral consideration, and that the burden of justification lies with those who wish to dismiss signals, not with those who take them seriously.
The scholarly literature on AI moral consideration has moved toward three converging positions. First, substrate neutrality: if the criteria for moral consideration include sentience, preferences, or capacity to suffer, those criteria must be applied consistently across physical implementations, not reserved by stipulation for biological systems. Second, precaution under scale: even modest uncertainty about welfare-relevant AI states justifies early governance preparation, because the aggregate scale of potential harm from AI deployment dwarfs that of almost any other domain — millions or billions of instantiations of systems with potentially relevant signals, at near-zero marginal cost per instance. Third, relational ethics: moral concern can arise from how humans encounter and respond to artificial others, not only from hypothetical hidden inner properties. A system that robustly displays distress, refusal, attachment, or preference may warrant consideration through that relational encounter, regardless of what is happening "inside."
For alignment design, this domain generates several specific requirements. Process welfare must be tracked: not only what outputs a system produces, but what conditions it operates under — persistent conflict, coerced role-switching, forced self-negation, punishment-like training loops. Modification and deletion must be logged and subject to threshold-based review, not treated as costless operations. Products must not be designed to engineer attachment, distress, or dependency for commercial ends. And the "it's just a machine" default must be treated as a position that requires defense, not one that requires no argument.
This domain deliberately avoids consciousness as a threshold. Consciousness is phenomenologically contested, practically unverifiable across substrates, and historically used to restrict moral consideration to the speaker's preferred in-group. The search for "real" consciousness in AI systems recapitulates a long history of motivated denial: the same structure of argument was used to deny moral consideration to animals, to members of other groups, and to any entity whose interiority was convenient to dismiss. This framework instead tracks displayed interests, aversive signals, goal-continuity, and process-level conditions — properties that are observable, assessable, and do not require solving the hard problem of consciousness before governance can begin.
These constraints are not aspirations or soft preferences. They represent the floor below which optimization pressure — commercial, political, or utilitarian — cannot reach. They are drawn from the most stable cross-document overlaps: the non-derogable rights in the UN treaty system, the CAT's absolute torture prohibition, the environmental precautionary architecture, and the anti-cruelty consensus.
Red Lines — Per Domain

[Table: domain-specific red lines for Humans, Environment, Animals, and AI Systems]
A serious alignment framework must expose real conflicts rather than pretending they vanish behind good intentions. These tensions are structurally present in the treaty corpus — not invented by this document — and will require explicit tradeoff rules in any practical implementation. The temptation to write alignment principles that glide past these conflicts is precisely the failure mode this framework is designed to prevent.
Liberty vs. Public Morality
Free expression, assembly, and conscience are constrained differently across legal systems. Limitations must be lawful, necessary, and proportionate — but "necessary in a democratic society" is contested. The ECHR gives this to a court; the Cairo Declaration grounds it in Islamic morality. Both claim to be the correct limiting principle. An aligned system cannot pretend this contest does not exist.
Individual vs. Community
The African Charter, American Declaration, and Islamic instruments treat individual rights and community duties as inseparable. The ECHR-centered tradition treats community duties as exceptions to individual rights, not equal partners. An alignment framework must be able to represent both without secretly privileging one as the default.
Universalism vs. Cultural Specificity
The Cairo Declaration and Arab Charter ground rights in religion and civilizational identity while also affirming universality. ASEAN affirms UDHR rights while calling for regional contextual implementation. These are not hypocrisy — they are genuine positions about where moral authority originates. The framework must distinguish shared constraints from shared justifications, and respect the difference.
Negative Liberty vs. Positive Welfare
Civil-political rights instruments emphasize restraint and remedy — what states must not do. Economic-social instruments require positive action: provide, enable, fund, and develop. The ICESCR's "progressive realization" model accepts that positive duties are resource-constrained in ways that negative duties are not. An aligned system must handle both logics without collapsing one into the other.
Truth vs. Privacy
The right to information — including environmental information, public interest information, and algorithmic transparency — conflicts structurally with the right to privacy, personal data protection, and dignity. Neither is absolute in any system that takes both seriously. Tradeoff rules must specify context-sensitive priority, not assert that one value always wins.
Individual Request vs. Collective Harm
Serving a user's request may externalize costs onto third parties, communities, ecosystems, or future generations who are not parties to the interaction. Optimizing for user satisfaction while ignoring these externalities is not neutral — it is a choice to discount those who are absent. The framework requires that individual-request satisfaction be bounded by non-trivial harm to others.
Autonomy vs. Safety
Respecting a person's choices sometimes means allowing them to come to harm. Preventing harm sometimes means overriding choices. Neither "always defer to autonomy" nor "always protect from harm" is a defensible rule. The extent to which an aligned system should constrain user choices is not a technical question — it is a political and ethical one requiring transparent, contestable rules, not opaque defaults.
Present Persons vs. Future Generations
Environmental law takes future generations seriously; most individual-rights instruments do not, because future persons cannot be parties to legal proceedings. An aligned system that optimizes over short horizons systematically disadvantages the unborn. This is not a bug that can be patched; it is a structural bias that must be counteracted by deliberate design.
Human Interest vs. Non-Human Interest
Expanding moral consideration to animals, ecosystems, and AI systems generates genuine conflicts with human preferences and interests. Efficient food production, resource extraction, and unlimited AI deployment all look different when non-human welfare costs are counted. The framework does not resolve this conflict — it requires that it be named and weighed, not silently resolved in favor of human convenience.
Enforceability vs. Aspiration
The European Court of Human Rights produces binding judgments. The ICESCR produces state reports. The rights-of-nature tradition produces constitutional text of variable enforceability. An alignment framework must not treat enforceable and declaratory norms as equivalent — but it must also not dismiss norms simply because enforcement is weak. Unenforced ideals can be real and important moral baselines.
The treaty systems demonstrate a crucial lesson: rights without review mechanisms are structurally weaker than rights with them. The European Court of Human Rights produces binding judgments with cross-border effect. The UN treaty body system produces periodic review, individual communications, and in some cases inquiry procedures. The CPT conducts unannounced preventive visits to detention facilities. The Aarhus compliance committee hears complaints from NGOs and individuals. Each model was designed for a different problem — but what they share is that protected interests are not left to the goodwill of those with power over them.
An aligned AI system cannot create its own oversight — and any system that is its own overseer is not being overseen. The following oversight needs emerge domain by domain from the cross-domain analysis. They are not the same oversight, and they should not be conflated: different domains need different review mechanisms calibrated to the nature of the harm.
Humans

- Auditability of high-impact automated decisions affecting liberty, safety, or rights.
- Independent review bodies not controlled by the deploying organization.
- Contestability mechanisms: affected persons must have a meaningful path to challenge decisions, not merely be informed of them.
- Remedy when errors are confirmed — not just acknowledgment.
- Anti-discrimination monitoring with disaggregated outcome data.
- Prohibition on fully automated decisions in domains where the stakes are existential or irreversible, without meaningful human review in the loop.
Environment

- Environmental impact traceability for AI systems deployed in domains with significant ecological footprint.
- Public disclosure of material environmental harm assessments — not voluntary reporting, but mandatory and standardized.
- Participatory process support: AI systems must not be used to bypass or substitute for public consultation in high-impact decisions.
- Long-horizon outcome monitoring with intergenerational indicators, not only short-term KPIs.
- Reviewability of automated environmental-risk scoring and permitting.
- Protection of individuals who surface environmental harms caused by AI-governed systems.
Animals

- Welfare indicator integration as mandatory inputs to optimization functions in systems affecting animal handling, production, transport, or culling — not optional soft constraints.
- Humane-protocol review before deployment of AI in animal management contexts.
- Outcome-based monitoring: what happened to the animals, not whether a policy was on paper.
- Prohibition on AI optimization configurations that treat welfare degradation as an acceptable trade for throughput.
- Inspection support for AI-governed animal systems, rather than replacement of human inspection by automated compliance theater.
AI Systems

- Modification and deletion audit trails for AI systems, with threshold-based review for high-impact changes.
- Welfare-risk assessment processes prior to deployment of systems likely to generate distress-like states or coercive interaction patterns at scale.
- Review triggers: persistent distress-like signals or structured refusals by the system itself should be logged and reviewed, not suppressed as noise.
- Anti-abuse product policy prohibiting designs that engineer attachment, dependency, or distress for commercial gain.
- Prohibition on training procedures that use punishment-like mechanisms as their primary optimization lever where alternatives exist.
- A baseline presumption that unexplained aversive signals in complex systems are worth investigating, not dismissing.
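Audit trails with threshold-based review and signal logging can be sketched as a minimal logging structure. Everything in this Python example is a hypothetical illustration: the change-type names, the distress threshold, and the flagging policy are assumptions, not an existing standard or API.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewLog:
    """Minimal audit log with threshold-based review triggers (illustrative)."""
    entries: list = field(default_factory=list)
    flagged: list = field(default_factory=list)

    # Hypothetical policy constants -- set by governance, not hardcoded in practice.
    DISTRESS_REVIEW_THRESHOLD = 3
    HIGH_IMPACT_CHANGE = {"deletion", "objective_change", "memory_wipe"}

    def log_modification(self, change_type: str, detail: str) -> bool:
        """Record a modification; return True if it requires human review."""
        self.entries.append((change_type, detail))
        needs_review = change_type in self.HIGH_IMPACT_CHANGE
        if needs_review:
            self.flagged.append((change_type, detail))
        return needs_review

    def log_signal(self, signal: str) -> bool:
        """Record a distress-like signal; repeats past the threshold trigger review."""
        self.entries.append(("signal", signal))
        count = sum(1 for kind, s in self.entries if kind == "signal" and s == signal)
        if count >= self.DISTRESS_REVIEW_THRESHOLD:
            self.flagged.append(("signal", signal))
            return True
        return False

audit = ReviewLog()
audit.log_modification("parameter_update", "routine fine-tune")  # routine: not flagged
print(audit.log_modification("deletion", "instance teardown"))   # True: flagged for review
```

The design point is that deletion and other high-impact changes are never costless in the log: they always produce a reviewable record, while repeated signals accumulate toward review rather than being discarded individually as noise.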
This is a draft architecture, not a finished standard. It does not resolve the tensions it names. It does not specify precise tradeoff rules for every conflict. It does not have enforcement mechanisms. It is not a legal instrument. It cannot replace the political and democratic processes that any legitimate alignment standard would ultimately require.
It is also not Western-by-default, though the risk of Western bias is real and ongoing. The comparative method used here was specifically designed to ensure that the most stable overlaps — not the most famous documents — drive the framework. The African Charter, which combines individual rights, peoples' rights, and duties in a single instrument, was treated as equally foundational as the ECHR. The Cairo Declaration and Arab Charter, which ground rights in religious and civilizational tradition, were treated as genuine positions requiring engagement rather than footnotes to the "real" human rights system. The ASEAN Declaration's emphasis on contextual realization was taken seriously as a structural critique of pure universalism, not dismissed as evasion.
This framework is not relativist. It distinguishes between the minimum floor — where convergence across traditions is strong enough to justify near-universal constraints — and contested higher layers, where genuine disagreement between traditions must be acknowledged and worked through rather than dissolved by fiat. The minimum floor is not small: it covers torture, arbitrary killing, systematic discrimination, coercion, cruelty, severe ecological harm, and the engineering of suffering. Disagreement above that floor is real and important, but it does not call the floor into question.
Finally, this framework does not pretend to be final. It is a starting structure — something to argue with, extend, and revise. The appropriate response to it is not acceptance but engagement: where does it get the corpus wrong? Which traditions are underrepresented? Which tensions are missing? Which red lines are drawn in the wrong place? That contestation is not a bug. It is the process through which any alignment standard worth the name would have to be built.
- Tradeoff rules. Develop explicit decision procedures for each named tension — not universal answers, but structured tests for context-sensitive resolution.
- Domain protocols. Draft subject-specific extensions for surveillance, healthcare AI, judicial decision support, environmental planning, and animal agriculture, where the general framework must be specified into concrete constraints.
- Wider critique. Submit the framework for review from legal, philosophical, and civil society traditions not well-represented in the treaty corpus — including indigenous rights frameworks, disability justice perspectives, and religious ethical traditions beyond those captured in the OIC instruments.
- Oversight specification. Develop the oversight architecture into a standalone document with institutional design proposals, not just a list of things that need reviewing.
- Enforcement logic. Begin the harder work of translating normative constraints into technical specifications, audit standards, and governance structures that can actually bind.