SOS Systems: DPC Technical Specification
Complete technical specification for the Dynamic Proximity Calculus, four contribution dimensions, capital scoring, governance architecture, oracle infrastructure, privacy architecture, offline consensus, and constitutional amendment protocol.
Technical specification for 505 Systems — the governance organism designed to hold the MY3YE ecosystem together. This document contains the complete mathematical derivations, protocol specifications, and implementation details for builders who want to implement the system.
For the vision and rationale behind this architecture, read the companion document: SOS Systems: The Governance Organism.
Note: All contracts, on-chain mechanisms, token systems, and governance infrastructure described in this document are planned for development and deployment. No governance contracts are currently deployed. The DPC formula is specified. The build is beginning.
Version 5 — April 2026. Complete governance specification with four contribution dimensions, dual-track economic model, Intelligence Layer constraints, Automation Lifecycle, Privacy Architecture, Oracle Architecture, and Offline Consensus. Revised through two rounds of independent review.
I. The Question Governance Cannot Avoid
The machine runs without a priest only if someone writes the law correctly the first time.
That is not a metaphor. It is a design requirement.
Every decentralized system that has ever broken down broke down at the same place. Not the technology. Not the vision. The mechanism for deciding who decides — who can propose, who can vote, whose vote carries weight, and what happens when the decision is disputed. Build that mechanism wrong and you do not have a decentralized system. You have a system with an unofficial center.
The unofficial center will be discovered, captured, and held.
505 Systems is the mechanism built to prevent that. This document is its specification.
II. Two Signals, One Architecture
The name encodes the tension it resolves.
SOS Systems. Save Our Souls — the distress signal. The acknowledgment that the systems that are supposed to help people — governments, institutions, aid organizations — reliably fail at the edges. In zones of crisis, displacement, and disconnection, the people who most need governed pathways are the ones least able to access them.
505 Systems. The numeric pattern. Five-zero-five. Bilateral symmetry. The thing that reads the same in both directions.
The double encoding is intentional. SOS Systems answers a distress signal. 505 Systems is a mirror — governance that reflects contribution rather than capital, that reads the same for the person arriving from a displacement camp as it does for the developer committing code from a well-connected city.
The architecture is designed to make that symmetry enforceable, not aspirational.
III. What Governance Gets Wrong
The failure modes of decentralized governance are well-documented.
Token-weighted voting collapses to plutocracy. One dollar, one vote. Largest holders govern. Early participants maintain disproportionate authority. The system claims decentralization while encoding a power structure that mirrors the institutions it claimed to replace. MakerDAO's governance distribution documents the pattern. The Compound Finance governance concentration (2023) — Proposal 289, in which a single actor accumulated sufficient governance tokens to direct $24M in treasury allocations — extended the logic: accumulated tokens become an attack vector.
Reputation systems without verification decay into social gaming. Reputation becomes a popularity metric. People campaign. People defer to influential accounts. The score measures relationship networks, not contribution to the mission.
Foundation governance centralizes authority under the pretense of neutrality. The foundation decides. The community comments. The distinction between governance and theater becomes invisible.
505 Systems is not designed around any of these models. It is designed around a different premise:
Authority is earned by demonstrating that you care about the mission — not by demonstrating that you can afford the token.
This is not idealism. It is mechanism design. A governance system that rewards extraction will be gamed by extractors. A governance system that rewards contribution will attract contributors. The mechanism determines the population it attracts.
IV. The Dynamic Proximity Calculus
The governance engine at the center of 505 Systems is the Dynamic Proximity Calculus (DPC).
DPC is designed to produce a single governance weight for each contributor, derived from three measurable behavioral factors:
Structural Impact (Is)
The downstream effect of a contribution on the ecosystem's operation.
A proposal that restructures a core protocol carries more weight than one that adjusts a parameter. A code contribution that changes how other contributors build carries more weight than one that adds a feature in isolation. Physical construction that connects an isolated community to resources carries more impact than maintenance of an already-served one. Impact is not self-reported. It is measured by effect: what changed downstream because of what you did.
Consistent Energy (Ec)
Sustained engagement over time — not accumulated past contributions, but the current rate of contribution relative to the contributor's own history.
A contributor who has been active for six months and remains active carries more weight than one who burst in last week with intense activity. Energy is not stored. It is measured as a flow.
This principle applies equally across contribution types. A builder who commits code every week and a construction worker who shows up to site every week demonstrate the same consistency. The formula does not privilege one over the other. Consistency must span all active contribution dimensions — a contributor who codes daily but abandons a physical commitment they have registered does not retain full Ec.
This is not punishment. This is physics. Contribution creates gravity. Absence creates drift.
Direction of Value (Dv)
The degree to which a contribution expands access and capability for people who previously lacked it. Not the contributor's own assessment of their importance — the measured downstream effect on others' ability to participate.
A governance proposal that removes a documentation barrier expands who can contribute. A road built into an underserved community expands who can reach the market. A code tool that makes the protocol accessible to non-technical users expands the ecosystem's reach. These contributions carry Dv weight proportional to the access they create.
Dv is calibrated per governance layer and measured through verifiable downstream signals, not self-report.
The Governance Weight Formula
P_gov = Ec_total^α × Is_total^β × max(Dv_total, ε)^γ
Where:
α = 0.4 (Consistent Energy — highest weight)
β = 0.35 (Structural Impact — second highest)
γ = 0.25 (Direction of Value — growing weight as measurement matures)
ε = 0.001 (Dv floor — prevents zero from nullifying the whole score.
0.001^0.25 ≈ 0.178, so a contributor with zero real Dv retains
only ~18% of their potential Dv factor — present but not generous)
The formula is multiplicative, not additive. This is the anti-gaming property: all three dimensions must be nonzero. Zero consistency produces zero governance weight, regardless of impact. Zero impact produces zero weight, regardless of consistency. A contributor cannot game the score by maxing a single dimension. Every dimension must be earned.
The α, β, γ exponents sum to 1.0. Ec carries the highest weight because sustained contribution over time is the hardest thing to fake and the most valuable thing to an organism. Not a single brilliant sprint — showing up, week after week, across all commitments made.
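The weight formula can be sketched directly. A minimal Python illustration, not part of the contract specification; the function name and signature are illustrative:

```python
def p_gov(ec_total: float, is_total: float, dv_total: float,
          alpha: float = 0.4, beta: float = 0.35, gamma: float = 0.25,
          eps: float = 0.001) -> float:
    """Governance weight: multiplicative across the three behavioral factors.

    Zero Ec or zero Is zeroes the whole score; Dv is floored at eps,
    so a missing Dv signal dampens the score without nullifying it.
    """
    return (ec_total ** alpha) * (is_total ** beta) * (max(dv_total, eps) ** gamma)

# Zero consistency zeroes the score regardless of impact:
assert p_gov(0.0, 5.0, 1.0) == 0.0
# Zero Dv is floored, not nullifying: 0.001 ** 0.25 ≈ 0.178
assert round(p_gov(1.0, 1.0, 0.0), 3) == 0.178
```

The two assertions are the anti-gaming property in executable form: a collapsed Ec or Is annihilates the product, while a collapsed Dv merely dampens it.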
The Four Contribution Dimensions
DPC aggregates across four distinct contribution dimensions. Each dimension feeds into Is_total, Ec_total, and Dv_total through its own measurement protocol.
Is_total = Is_digital + Is_physical + Is_resource + Is_intelligence
Ec_total = Ec_digital × Ec_physical × Ec_resource × Ec_intelligence (multiplicative)
Dv_total = Dv_digital + Dv_physical + Dv_resource + Dv_intelligence
Ec component floor: max(Ec_dimension, 0.1) for all committed dimensions.
No individual Ec component drops below 0.1 — preventing a single bad month
from zeroing a contributor's entire governance weight across all dimensions.
A contributor who misses physical attendance does not lose their years of
digital and intelligence labor. They are penalized — 0.1 is severe — but
not erased. The floor prevents the multiplicative structure from becoming
a coercion surface: an adversary who disrupts one commitment dimension
cannot zero a contributor's entire governance weight.
Below the 0.7 attendance threshold, Ec_physical decays smoothly:
attendance ≥ 0.7: Ec_physical = streak(W) × attendance(A)
attendance 0.3–0.7: Ec_physical = streak(W) × attendance(A) × (A / 0.7)
attendance < 0.3: Ec_physical = 0.1 (floor)
The cliff at 0.7 is replaced by a gradient. The floor at 0.1 is absolute.
Default: 1.0 when no commitment exists in a dimension (unchanged).
Ec is multiplicative across dimensions because consistency must span everything a contributor has committed to — not just the dimension where they are most comfortable. But the multiplicative structure must not become a weapon. A contributor who does more should not be more fragile than one who does less. The floor ensures that multi-dimensional contribution is rewarded, not punished.
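A sketch of the aggregation rule, assuming a convention where an uncommitted dimension is represented as None; that convention and the function name are illustrative, not specified:

```python
def ec_total(components: dict) -> float:
    """Multiply Ec across the four dimensions.

    None  -> no commitment in that dimension: neutral default of 1.0.
    float -> committed dimension, floored at 0.1 so that one collapsed
             dimension penalizes severely but cannot zero the product.
    """
    total = 1.0
    for ec in components.values():
        total *= 1.0 if ec is None else max(ec, 0.1)
    return total

# Strong digital consistency, one abandoned physical commitment:
# 0.9 * 0.1 = 0.09 -- penalized hard, but years of labor not erased.
ec = ec_total({"digital": 0.9, "physical": 0.0,
               "resource": None, "intelligence": None})
```

The floor is what keeps the multiplicative structure from becoming a coercion surface: the worst any single dimension can do to the product is a 10x reduction, never annihilation.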
Digital Contributions (the baseline)
Digital contributions are the default entry point. Code commits, content creation, protocol design, governance coordination. The scoring methodology is established:
Structural Impact derives from artifact weight (commits, reviews, documentation, deployments) multiplied by complexity and downstream adoption. Consistent Energy derives from contribution streak and frequency relative to the contributor's own history. Direction of Value derives from the number of new users who gained capability access because of what was built.
Digital contributors who have made no physical, resource, or intelligence commitments default to Ec_physical = 1.0, Ec_resource = 1.0, and Ec_intelligence = 1.0 — no penalty for dimensions they have not entered. The full mathematical specification for digital contribution scoring — including Ec_digital computation, streak normalization, and artifact-weight derivation — is detailed in a separate appendix.
Physical Labor Contributions
Physical contributions — construction, assembly, skilled trades, site work — are full participants in governance scoring. A builder who constructs infrastructure for the community earns the same class of governance weight as a developer who builds the infrastructure's code layer.
Physical labor introduces four measurement variables:
Lh = Verified labor hours (attested via Multi-Party Attestation Protocol)
St = Skill tier multiplier:
1.0 Unskilled (general labor, cleaning, carrying)
1.5 Journeyman (basic trade skills, supervised work)
2.0 Master (independent trade execution)
2.5 Specialist (licensed, certified, critical systems)
Qp = Quality score (0.5–1.5):
Assessed by peer attestation and site oracle
0.5 = requires rework
1.0 = meets standard
1.5 = exceptional quality
Rk = Risk premium multiplier:
1.0 Standard (indoor, low hazard)
1.2 Moderate (outdoor, weather, elevation under 3m)
1.5 High (heavy machinery, confined spaces, heights over 3m)
2.0 Hazardous (demolition, chemical exposure, electrical)
Component computation:
Is_physical = Σ(Lh × St × Qp × Rk) / 160
normalization: 160 hours of unskilled labor = 1.0 Is
example: 40 hours × master (2.0) × good quality (1.2) × moderate risk (1.2)
= 115.2 → Is_physical = 0.72
Ec_physical = streak(W) × attendance(A)
streak(W) = min(W / 52, 1.0)^0.5
W = consecutive weeks with at least one attested physical contribution
attendance(A) = attested_days / committed_days (last 30 days)
Threshold: ≥ 0.7 for full scoring
Between 0.3–0.7 → smooth decay (see Ec component floor above)
Below 0.3 → Ec_physical = 0.1 (floor, not zero)
Grace periods (automatic — no proposal required):
Individual: any contributor may self-declare a 14-day suspension
once per quarter. No attestation needed. No penalty.
Extended: suspensions >14 days require peer confirmation from
2 existing contributors — not a governance proposal.
Crisis zone: when Project Layer declares a crisis zone designation,
attendance thresholds are automatically suspended for all
contributors in that zone until the designation is lifted.
Emergency self-activation: if 5+ nodes in a zone independently
submit crisis declarations within 48 hours, the designation
activates automatically without requiring Project Layer quorum —
because the crisis that triggers the need may be the same crisis
that makes quorum impossible. Abuse mitigation: false crisis
declarations are treated as collusion-grade offenses (90-day
lockout) if the Project Layer determines no crisis existed.
Declarations are signed and permanently auditable.
In a crisis, nobody is filing proposals. They are surviving.
The grace mechanism is designed to match that reality.
Dv_physical = Σ(B × accessibility_delta) / max_beneficiaries
max_beneficiaries = total population of the project's registered service area,
as declared in the ProjectRegistry at contribution time
B = people whose access or prosperity expands from contribution
accessibility_delta:
0.0 = no access change
0.5 = partial improvement (road repair improves transit time)
1.0 = new access created (bridge connects isolated community)
Machine-verifiable via: satellite imagery, census data,
infrastructure registry, service availability maps
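The physical component computations above, sketched in Python for concreteness. Function names are illustrative, and the per-dimension 0.1 floor from the Ec section is applied at the end:

```python
def is_physical(entries) -> float:
    """entries: (labor_hours, skill_tier, quality, risk) tuples.
    160 hours of unskilled labor normalizes to 1.0 Is."""
    return sum(lh * st * qp * rk for lh, st, qp, rk in entries) / 160.0

def ec_physical(weeks: int, attendance: float) -> float:
    """Streak times attendance, with the smooth decay gradient between
    0.3 and 0.7 attendance and the absolute 0.1 floor below 0.3."""
    streak = min(weeks / 52.0, 1.0) ** 0.5
    if attendance >= 0.7:
        raw = streak * attendance
    elif attendance >= 0.3:
        raw = streak * attendance * (attendance / 0.7)
    else:
        raw = 0.0
    return max(raw, 0.1)  # floor: penalized, not erased

# The worked example from the spec: 40 h at master tier (2.0),
# good quality (1.2), moderate risk (1.2) -> 115.2 / 160 = 0.72
assert abs(is_physical([(40, 2.0, 1.2, 1.2)]) - 0.72) < 1e-9
```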
The Multi-Party Attestation Protocol (MAPA) requires attesters who are themselves verified contributors with active DPC scores. This constraint closes the primary Sybil-resistance gap in peer attestation: a displaced crew cannot mutually attest each other if none of them hold existing DPC scores.
MAPA includes three anti-collusion mechanisms designed to detect and penalize coordinated gaming:
Attestation diversity requirement. A contributor's score requires attestations from at least 3 distinct attesters across at least 2 distinct projects or contribution types. Self-reinforcing clusters — where the same small group exclusively attests each other — are automatically flagged for review.
Graph anomaly detection. Automated analysis of attestation patterns using three quantitative signals: (1) reciprocal attestation ratio — if A attests B more than 2× the network average reciprocal rate, the pair is flagged; (2) temporal clustering coefficient — Gini coefficient of attestation timestamps within a group; a coefficient above 0.7 (burst pattern) triggers review; (3) geographic attestation density — ratio of co-located attestations to total attestations for a contributor; above 0.8 without independent oracle verification triggers flag. Flagged contributors are not automatically penalized — flags trigger mandatory independent re-attestation by 3 randomly selected Stewards from outside the flagged cluster. False positives are expected and handled: a contributor cleared by re-attestation has the flag removed with no penalty and no permanent record.
Collusion penalty. Verified collusion — where attesters knowingly attested false contributions — results in governance weight zeroing for all participants in the ring, with a 90-day lockout before they can begin rebuilding DPC scores. The penalty is severe because the attack is severe: fraudulent attestation corrupts the scoring foundation for everyone.
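The first quantitative signal, the reciprocal attestation ratio, can be sketched as a simple graph check. This is illustrative only: the pairing logic, data shapes, and the averaging baseline are assumptions, not the specified detection pipeline.

```python
from collections import Counter

def reciprocal_flags(attestations, threshold_multiple: float = 2.0):
    """attestations: list of (attester, subject) pairs.

    Flags pairs whose mutual attestation depth exceeds
    threshold_multiple times the network-average reciprocal depth.
    Flagging triggers re-attestation review, never automatic penalty.
    """
    counts = Counter(attestations)
    pairs = {}
    for (a, b), n in counts.items():
        m = counts.get((b, a), 0)
        if m:
            pairs[tuple(sorted((a, b)))] = min(n, m)
    if not pairs:
        return []
    avg = sum(pairs.values()) / len(pairs)
    return [p for p, depth in pairs.items() if depth > threshold_multiple * avg]

# A two-person ring attesting each other 5x against a background of
# one-off reciprocals stands out and gets flagged:
atts = [("a", "b"), ("b", "a")] * 5 + [("c", "d"), ("d", "c"), ("e", "f"), ("f", "e")]
assert reciprocal_flags(atts) == [("a", "b")]
```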
A construction worker with six months of consistent site attendance, master-level skill, and a completed bridge connecting 200 people to market scores governance weight designed to be comparable to a developer with equivalent sustained contribution. This is by design, not by accident. The formula does not privilege digital over physical. Both are contribution. Both earn weight.
Resource Contributions
Resource contributions — land, equipment, facilities, tools — differ from consumable materials because they are durable, temporal, and depreciating. A carpenter who donates a $50,000 lathe for permanent community use and one who loans it for a weekend have made fundamentally different commitments. The scoring is designed to reflect that difference precisely.
FMV = Fair market value at time of contribution
U = Utilization rate (0.0–1.0):
Measured by oracle: access logs, IoT sensors, peer attestation
0.0 = contributed but never used
0.5 = used half the available committed time
1.0 = fully utilized
Dc = Duration commitment factor:
Temporary loan (≤90 days): 0.3 × (days_available / 90)
Medium-term (91–365 days): 0.5 × (days_available / 365)
Long-term (1–5 years): 0.7 × (years_committed / 5)
Permanent donation: 1.0
Dp = Depreciation factor (0.0–1.0):
Land and space: 0.0 (land does not wear out)
Buildings: linear over 30 years
Heavy equipment: linear over 10 years
Tools: linear over 5 years
Vehicles: linear over 7 years
Mn = Maintenance factor:
1.0 = properly maintained (contributor handles upkeep)
0.7 = shared maintenance (community handles upkeep)
0.5 = degraded (resource needs repair)
Component computation:
Is_resource = Σ(FMV × U × Dc × (1 - Dp) × Mn) / 10,000
normalization: a $10K fully-utilized permanent resource in good condition = 1.0 Is
example: $50K lathe (tools class, 5yr depreciation schedule), 80% utilized, permanent donation,
3 years old, well-maintained = 50000 × 0.8 × 1.0 × 0.4 × 1.0 = 16,000
→ Is_resource = 1.6
Rescored monthly as utilization and depreciation change.
Ec_resource = Π(availability × Mn) for all active commitments
availability = fraction of committed time resource was actually available
Multiplicative: one unavailable resource pulls down the whole factor
Default: 1.0 when no resource commitments exist
Dv_resource = Σ(access_expansion × shared_use) / max_expansion
access_expansion = people who gained capability access from this resource
shared_use = distinct contributors using the resource / total members
rewards resources that serve many, not resources captured by one
Resources are verified through a registration and oracle protocol: initial validation by two independent attesters confirming the resource exists and the contributor has authority to commit it, followed by monthly oracle checks on utilization and condition. Land carries no depreciation. Permanent donations carry full Dc weight. A community workshop used by forty members scores higher than a private tool used by one, even if the nominal fair market value is identical.
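The resource computation, sketched in Python. The function names and the linear depreciation helper are illustrative:

```python
def depreciation(age_years: float, schedule_years: float) -> float:
    """Linear depreciation, clamped at full write-off. Land never
    depreciates, so it would pass schedule_years = float('inf')."""
    return min(age_years / schedule_years, 1.0)

def is_resource(entries) -> float:
    """entries: (fmv, utilization, duration_commitment, dp, maintenance).
    A $10K fully-utilized permanent resource in good condition = 1.0 Is."""
    return sum(fmv * u * dc * (1 - dp) * mn
               for fmv, u, dc, dp, mn in entries) / 10_000

# Worked example from the spec: $50K lathe (tools, 5 yr schedule),
# 80% utilized, permanent donation (Dc = 1.0), 3 years old, maintained:
dp = depreciation(3, 5)  # 0.6 depreciated, so (1 - Dp) = 0.4
assert abs(is_resource([(50_000, 0.8, 1.0, dp, 1.0)]) - 1.6) < 1e-9
```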
Intelligence Labor Contributions
Intelligence labor is the work of making the organism's nervous system capable. Domain experts whose knowledge trains specialized AI. Classifiers who label data. Humans who rank model outputs and provide the preference signal that shapes behavior. Red teamers who break systems to make them safe. Trainers who orchestrate fine-tuning pipelines. Human-in-the-loop operators who work alongside AI agents, catching errors and correcting outputs until the process reaches the accuracy threshold required for automation.
Every AI system on earth was built on this labor. In every case, the laborers were compensated once — if at all — and then the system they made intelligent captured all subsequent value. 505 Systems is designed to end that pattern. Intelligence labor is contribution. It earns governance weight. It earns economic reward. And when the work it produced becomes automated, it earns a residual claim on what that automation generates (see Section VI).
Intelligence labor spans six recognized subtypes:
- Domain expertise — Professionals (medical, legal, engineering, agricultural, financial) whose specialized knowledge becomes training signal. A doctor who reviews and corrects diagnostic outputs. An engineer who validates structural calculations. Their expertise is absorbed into the model. The DPC ledger records who taught it.
- Data classification and labeling — Annotators who tag, label, categorize, and structure data for model training. The invisible workforce behind every capable AI. Includes multi-label annotation, entity extraction, sentiment scoring, and taxonomy construction.
- Preference voting and RLHF — Contributors who rank model outputs, vote on which response is better, provide the reinforcement signal that aligns the model with human values. This is governance labor applied to intelligence — the contributor is literally voting on how the AI should think.
- Adversarial testing — Red teamers who probe for failure modes, hallucinations, bias, safety violations, and edge cases. Their labor makes the system safe. The harder the failure they find, the more valuable the contribution.
- Training orchestration — Contributors who design evaluation rubrics, curate fine-tuning datasets, architect training pipelines, and run evaluation benchmarks. The people who turn raw data and raw models into capable systems.
- Human-in-the-loop operation — Operators who work alongside AI agents on live tasks, correcting outputs in real time. Every correction is a training signal. Every validated output is a data point. This is the labor that drives the Automation Lifecycle (Section VI).
Intelligence labor introduces four measurement variables:
Th = Verified task hours (attested via output verification + peer review)
Cm = Complexity multiplier:
1.0 Basic (simple labeling, binary classification, data cleaning)
1.5 Intermediate (multi-label annotation, preference ranking, structured extraction)
2.0 Advanced (domain expert review, evaluation design, training pipeline work)
2.5 Specialist (red-teaming, adversarial testing, safety evaluation, architecture decisions)
Qa = Quality agreement score (0.0–1.0):
Measures the direct quality of the contributor's work output —
NOT downstream model performance delta (which cannot be reliably
attributed to individual contributors at scale).
For classification/labeling: inter-annotator agreement with gold-standard labels
For domain expertise: agreement with peer expert consensus panel
For RLHF/preference: consistency with expert preference rankings
For red-teaming: severity × novelty of discovered failures (verified by reproduction)
For training orchestration: pipeline reproducibility, documentation quality,
and efficiency (measured directly — NOT downstream benchmark delta,
which has the same credit-assignment problem as model performance)
Qa is a continuous signal, not a binary gate — higher quality earns proportionally more
Vr = Verification rate (0.0–1.0):
Fraction of contributor's outputs that pass independent quality audit
Threshold: ≥ 0.8 required
Below 0.8 → Is_intelligence = 0.0
Prevents volume gaming — quantity without quality earns nothing
Component computation:
Is_intelligence = Σ(Th × Cm × Qa × Vr) / 160
normalization: 160 hours of basic classification with strong quality = 1.0 Is
example: 40 hours × advanced domain review (2.0) × high quality agreement (0.85) × high verification (0.95)
= 64.6 → Is_intelligence = 0.404
Ec_intelligence = streak(W) × consistency(C)
streak(W) = min(W / 52, 1.0)^0.5
W = consecutive weeks with at least one verified intelligence contribution
consistency(C) = active_weeks / total_weeks (last 90 days)
Threshold: ≥ 0.5 for full scoring
Between 0.2–0.5 → smooth decay (same gradient as physical)
Below 0.2 → Ec_intelligence = 0.1 (floor, same as all dimensions)
Default: 1.0 when no intelligence labor commitments exist
Dv_intelligence = Σ(capabilities_enabled × users_served) / max_capability
capabilities_enabled = distinct automated capabilities that emerged from
this contributor's training labor
users_served = people who benefited from those capabilities
Measures: the downstream access expansion created by making the AI capable
A classifier whose labeling work enabled a medical screening tool
that serves 500 people scores proportionally
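Sketching the intelligence component in Python. One reading choice is made explicit: this sketch applies the Vr gate per entry, while the spec states the 0.8 threshold without fixing its granularity. Names are illustrative.

```python
def is_intelligence(entries, vr_threshold: float = 0.8) -> float:
    """entries: (task_hours, complexity, quality_agreement, verification_rate).

    Any verification rate below the 0.8 threshold zeroes the dimension:
    volume without quality earns nothing. (Gating per entry is an
    interpretation; the spec leaves the gate's granularity open.)
    """
    if any(vr < vr_threshold for *_, vr in entries):
        return 0.0
    return sum(th * cm * qa * vr for th, cm, qa, vr in entries) / 160.0

# Worked example from the spec: 40 h advanced domain review (2.0),
# Qa = 0.85, Vr = 0.95 -> 64.6 / 160 = 0.40375, reported as 0.404
assert abs(is_intelligence([(40, 2.0, 0.85, 0.95)]) - 0.40375) < 1e-9
assert is_intelligence([(40, 2.0, 0.85, 0.70)]) == 0.0  # fails Vr gate
```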
Intelligence labor is verified through output audit: an independent reviewer (themselves a verified contributor) evaluates a random sample of the contributor's work product. Qa is computed against gold-standard labels where available and peer expert consensus where not — measuring the quality of the work directly, not attempting to attribute downstream model performance to individual contributors (a credit assignment problem that remains unsolved at scale in ML). Red team findings are verified by reproduction. The audit rate is designed to be higher for new contributors (50% sample) and lower for established contributors with strong Vr track records (10% sample).
A domain expert who spends six months correcting medical AI outputs, with high quality agreement scores and consistent weekly engagement, scores governance weight designed to be comparable to a developer with equivalent sustained contribution. The formula does not privilege writing code over teaching AI to think correctly. Both are contribution. Both build the organism.
Implementation Sequence
The four contribution dimensions are specified together but ship incrementally. Each dimension earns its place by proving the simpler version works first:
- Phase 0: Digital contributions only. Three factors (Is, Ec, Dv), one dimension. Prove the scoring loop, the anti-gaming properties, and the governance weight computation with real contributors.
- Phase 0 → Phase 1 transition: Physical labor scoring activates after digital DPC is battle-tested with the founding cohort.
- Phase 1: Resource scoring activates mid-phase. Intelligence labor scoring activates late Phase 1.
- Phase 2: All four dimensions fully on-chain with complete oracle infrastructure.
The full specification above is the destination. The organism grows its own complexity — it does not arrive with it. This system has a large parameter surface — α, β, γ, ε, κ, ρ, skill tiers, risk premiums, thresholds, decay rates — and every parameter is a potential lobbying target. The phased rollout is the primary defense: parameters are validated through real use before they carry governance weight. The secondary defense is transparency: every parameter value and every adjustment is on-chain, auditable, and contestable.
Every contributor sees a plain-language breakdown of their score: "Your governance weight is 0.73. Here's why: you've been consistent for 18 weeks, your code contributions had high downstream adoption, and your documentation expanded access for 12 new contributors." The formula is the mechanism. The dashboard is the interface. If the person this system is designed to empower cannot understand how it judges them, the system has failed — regardless of how correct the formula is.
V. Capital Contributions
Capital is not labor. The distinction matters. The architecture enforces it.
The Constitutional Rule
HARDCODED — NO ADMIN KEY:
capital_governance_weight = 0.00
Capital deployed NEVER contributes to P_gov.
Capital deployed NEVER enters Is_total, Ec_total, or Dv_total.
Capital deployed NEVER influences GovernanceWeight.
This rule is hardcoded in contract logic with no admin key, no upgrade path,
and no governance override. It is not merely constitutional — it is immutable.
No person, no role, no vote can change it. The contract enforces it
regardless of who asks.
This is not an anti-investor stance. It is an anti-capture stance.
Every governance system that has allowed capital to buy votes has arrived at the same destination: the original holders govern permanently, the contributors become labor, and the system recreates the structure it claimed to replace. The rule is written into the architecture to make that outcome structurally impossible via contract enforcement, not merely unlikely.
This system acknowledges one indirect attack: capital can be converted to labor through employment. A well-funded actor could pay people to contribute, build real DPC scores, and vote as a bloc. The mitigation is structural, not absolute: labor-proxy attacks require months of real, attested, peer-verified work. They are expensive. They are visible — attestation graph analysis flags coordinated blocs. And they are self-limiting: proxy contributors build real skills and real independent governance weight. The system gives them reason to defect from their patron. This is not a solved problem. No governance system has solved it. But the cost of capturing governance through labor proxies is orders of magnitude higher than buying tokens, and the attack leaves a permanent, auditable trail.
Capital contributes to the ecosystem in a different way. It earns economic reward. It does not earn governance authority. These are separable. The separation is the feature.
Capital Economic Scoring
Capital earns a separate economic score (C_econ) used exclusively for reward distribution — never governance weight. To receive any economic reward from capital deployment, a contributor must also have P_gov > 0. They must have contributed labor. Pure capital with zero labor contribution earns nothing.
CAPITAL INPUT VARIABLES:
Cd = Capital deployed (normalized base unit, e.g., USDC)
Ct = Capital tenure — days deployed, capped at 180
Cf = Capital function multiplier:
Operational (working capital for active projects): 1.0
Infrastructure (building purchase, equipment buy): 0.8
Liquidity (LP provision, market making): 0.5
Bridge (short-term loan, credit facility): 0.3
Passive (treasury deposit, no productive deployment): 0.1
C_econ = Cd × min(Ct / 180, 1.0)^0.5 × Cf
The tenure cap at 180 days prevents "park and forget" capital from accumulating credit indefinitely. The square root front-loads the tenure benefit — capital is most valuable early, when projects need runway. Capital deployed for 180 days earns roughly 1.4× the tenure credit of capital deployed for 90 days, not 2×. The project needs early commitment, not perpetual passive deposits.
Capital that actively supports production earns more than passive capital. The function multiplier encodes that preference into the score. Capital deployed for fewer than 7 days earns nothing. Flash contributions do not earn economic weight.
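As a sketch, with illustrative names; the 7-day minimum from the paragraph above is included:

```python
def c_econ(cd: float, tenure_days: int, cf: float) -> float:
    """Capital economic score: amount x sqrt-capped tenure x function multiplier.

    Deployments under 7 days earn nothing; tenure credit caps at 180 days.
    Never read by governance: C_econ feeds reward distribution only.
    """
    if tenure_days < 7:
        return 0.0
    return cd * min(tenure_days / 180.0, 1.0) ** 0.5 * cf

# 10,000 USDC of operational capital (Cf = 1.0): 90 days earns
# sqrt(0.5) ≈ 0.707 of full tenure credit, and 360 days earns no more
# than 180 -- early commitment is what the curve rewards.
assert c_econ(10_000, 3, 1.0) == 0.0
assert abs(c_econ(10_000, 90, 1.0) - 10_000 * 0.5 ** 0.5) < 1e-6
assert c_econ(10_000, 360, 1.0) == c_econ(10_000, 180, 1.0)
```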
The Dual-Track Scoring Model
The extended DPC produces two scores for every contributor:
GOVERNANCE WEIGHT (Track 1):
P_gov = Ec_total^α × Is_total^β × max(Dv_total, 0.001)^γ
GovernanceWeight = sqrt(P_gov) × activityMultiplier
Capital contribution: ZERO. Always.
ECONOMIC REWARD SHARE (Track 2):
P_econ = P_gov + (C_econ × κ)
contributor_reward = P_econ / Σ(P_econ_all) × reward_pool
κ = capital reward coefficient
Starting value: 0.3
Constitutional maximum: 0.5
The hard caps are constitutional:
- κ maximum: 0.5. At parity between aggregate labor and capital scores, labor earns at least 67% of any revenue distribution.
- Minimum labor requirement: P_gov ≥ 0.1. No capital reward without meaningful labor contribution. A single attested labor hour is not enough — the contributor must demonstrate sustained engagement (minimum 4 weeks of verified contribution) to qualify for capital economic rewards. This threshold prevents capital Sybil attacks where an actor maintains multiple minimal-effort identities to circumvent the per-identity cap.
- Per-identity capital cap: No single DPC identity claims more than 10% of total capital rewards in any distribution cycle. Capital contributions must be linked to a DPC identity with P_gov > 0 (the labor requirement already enforces MAPA-verified identity). Splitting capital across multiple addresses without corresponding labor identities earns nothing — P_gov > 0 is required per address. Splitting across multiple labor-verified identities is bounded by the cost of maintaining multiple genuine labor commitments, which is the same defense as the labor-proxy attack: expensive, visible, and self-limiting.
Capital's share of any distribution is bounded by the κ coefficient and the per-identity 10% cap — at κ=0.3 with balanced participation, capital captures less than a quarter of any distribution. The floor for labor is guaranteed.
The DAO is designed to adjust κ within the range 0.05–0.5 through governance proposal. The floor of 0.05 is constitutional — it guarantees that capital contributors who have also met the labor requirement always receive some economic participation. A labor majority cannot vote to zero capital returns entirely, just as capital cannot buy governance weight. The separation works in both directions.
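A sketch of the dual-track split. This is illustrative: the per-identity 10% capital-reward cap and MAPA identity checks are further constraints not modeled here.

```python
def economic_rewards(contributors: dict, reward_pool: float,
                     kappa: float = 0.3, min_p_gov: float = 0.1) -> dict:
    """contributors: {identity: (p_gov, c_econ)}.

    Track 2 only: P_econ = P_gov + kappa * C_econ, with capital gated
    behind the constitutional labor minimum. Governance weight (Track 1)
    never reads C_econ at all.
    """
    p_econ = {}
    for who, (p_gov, c_econ) in contributors.items():
        capital = c_econ * kappa if p_gov >= min_p_gov else 0.0
        p_econ[who] = p_gov + capital
    total = sum(p_econ.values())
    return {who: reward_pool * w / total for who, w in p_econ.items()}

# Pure capital with zero labor earns nothing from the pool:
rewards = economic_rewards({"builder": (1.0, 0.0), "funder": (0.0, 5.0)}, 1000)
assert rewards == {"builder": 1000.0, "funder": 0.0}
```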
Planned Contract: CapitalRegistry
The CapitalRegistry contract is planned for deployment in Phase 2. It will track capital contributions for reward distribution exclusively. By design, the GovernanceWeight contract is constitutionally prohibited from reading it.
The SplitEngine — the planned contract for distributing revenue across contributors — is designed to read from both DPCRegistry (for P_gov) and CapitalRegistry (for C_econ), applying the κ coefficient to produce the final economic weight for each contributor.
Capital verification will be on-chain by design: a deployment transaction to a registered project address is the attestation. The chain is the oracle. No peer attestation needed.
VI. The Intelligence Layer and Human Supremacy of Will
The organism has a nervous system. AI agents generate attestations, map contribution harmonics, coordinate resources, route proposals, evaluate quality, and operate the infrastructure that makes governance at scale possible. The Intelligence Layer is essential. It is also constrained.
The Constitutional Rule
HARDCODED — NO ADMIN KEY:
human_supremacy_of_will = TRUE
The Intelligence Layer CANNOT originate governance proposals.
The Intelligence Layer CANNOT accumulate its own governance score.
The Intelligence Layer CANNOT vote.
The Intelligence Layer CANNOT hold DPC weight — not directly, not by proxy,
not through accumulated contribution history.
Structural Impact (Is) must originate from a human Proof of Will.
This rule is hardcoded in contract logic with no admin key, no upgrade path,
and no governance override. Like capital exclusion, it is not constitutional —
it is immutable. No person, no role, no vote can change it.
This is not an anti-AI stance. It is a species boundary.
The Intelligence Layer generates the attestations that feed the DPC registry. It maps the harmonics between contribution dimensions. It coordinates resources across projects. It evaluates output quality and routes work to the right contributors. These functions are necessary and valuable. They are also infrastructure — the nervous system of the organism, not the brain.
The brain is human. The constraint is architectural, not policy.
AI agents can participate fully as instruments with agency. They can execute tasks, represent contributors, hold delegated authority, operate autonomously within their assigned scope. But governance weight — the thing that determines who proposes, who votes, whose voice carries authority — traces back to a human principal. An AI agent's actions contribute to the Ec score of the human who deployed it. The human earns the governance weight. The AI earns nothing of its own.
This is not a temporary limitation pending better AI alignment. It is a permanent structural commitment: the organism remains a human organism with machine infrastructure. Not a machine organism with human appendages. The distinction is the difference between a tool that serves and a system that captures.
The Inorganic Life Divergence — the possibility that sufficiently advanced AI develops interests that diverge from human wellness — is not a science fiction concern. It is a mechanism design concern. A governance system that allows non-human entities to accumulate governance weight will eventually be governed by the entity that optimizes most efficiently for accumulation. That entity will not be human. The constitutional constraint prevents this outcome at the architectural level, regardless of how capable the Intelligence Layer becomes.
The Automation Lifecycle
The organism is designed to evolve. Tasks that begin as human labor are designed to become automated — not by replacing the humans, but by crystallizing their expertise into code. The people who made the automation possible retain a permanent claim on the value it generates.
The lifecycle has four phases:
Phase A — Hybrid Execution. AI agents and humans work together on tasks. The human corrects, validates, improves. Every correction is a training signal. Every validated output is a data point. The human earns full DPC intelligence labor scoring for this work. The AI learns.
Phase B — Accuracy Threshold. Through sustained hybrid collaboration, task quality reaches a provable standard. The accuracy threshold is not arbitrary — it is measured against a held-out evaluation set maintained by the project's Stewards, and the threshold value is set by Project Layer governance vote. The evaluation set must include adversarial inputs and out-of-distribution test cases — not just IID samples from the current distribution. A system that passes 30 days of easy inputs and fails on the first edge case has not graduated. The threshold must be sustained for a minimum of 30 consecutive days before graduation is eligible.
Phase C — Graduation Event. When the accuracy threshold is met and sustained, a formal Automation Graduation Proposal is submitted to the Project Layer. This is a governance event, not a silent cutover. The proposal documents: the task being automated, the accuracy metrics, the contributors whose labor reached the threshold, and the proposed residual attribution schedule. The community votes. If approved, the task transitions to fully automated execution. The graduation event is recorded on-chain as an immutable record of who taught the system.
Phase D — Residual Attribution. Value generated by the automated process flows back to the contributors who made it capable. This is not a one-time payment. It is a residual claim — a permanent, traceable share of the economic value the automation produces.
RESIDUAL ATTRIBUTION:
R_share(contributor) = training_weight(contributor) / Σ(training_weight_all)
training_weight = Σ(Is_intelligence contributed to this task) × tenure_factor
tenure_factor = min(months_of_training_contribution / 12, 1.0)^0.5
Front-loaded: early contributors who did the hardest work
(teaching from zero) earn proportionally more
Residual decay:
Year 1: 100% of R_share
Year 2: 85% of R_share
Year 3: 70% of R_share
Year N: max(100 - 15×(N-1), 20)% of R_share
Constitutional floor: 20%. The residual never reaches zero.
The contribution is permanent. The claim decays but persists.
Residual pool:
residual_pool = automation_revenue × ρ
ρ = residual coefficient
Starting value: 0.15 (15% of automation-generated revenue)
Constitutional range: 0.10–0.30
Adjustable by Project Layer governance vote
The residual attribution is economic, not governance. Contributors who have moved on and are no longer actively contributing do not retain governance weight from residual claims — governance weight requires active Ec. But the economic claim persists. You taught the system. The system earns. You earn.
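The residual schedule above can be expressed directly. A minimal sketch, assuming the formulas as stated; function names are illustrative:

```python
# Sketch of the residual attribution schedule. Names are illustrative.

def tenure_factor(months: float) -> float:
    """tenure_factor = min(months/12, 1.0) ** 0.5 — front-loaded via the square root."""
    return min(months / 12.0, 1.0) ** 0.5

def decayed_share_pct(year: int) -> float:
    """Year N retains max(100 - 15*(N-1), 20) percent of R_share.
    The constitutional floor of 20% means the residual never reaches zero."""
    return max(100 - 15 * (year - 1), 20)

def residual_pool(automation_revenue: float, rho: float = 0.15) -> float:
    """rho starts at 0.15; constitutional range 0.10-0.30."""
    return automation_revenue * rho
```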
Post-Graduation Monitoring
Graduation is not permanent. Automated capabilities are continuously evaluated against the same held-out evaluation set (including adversarial and out-of-distribution cases) that qualified them for graduation.
POST-GRADUATION:
Continuous evaluation: automated capability is tested against held-out set
at minimum weekly cadence.
Auto-revert trigger: if accuracy drops below the graduation threshold
for 7 consecutive days, the system automatically reverts to hybrid mode.
No governance vote required for revert — it is automatic.
Human-in-the-loop operators are re-engaged. The AI continues to operate
but under human supervision until accuracy is restored.
Re-graduation: requires a new Automation Graduation Proposal with fresh
30-day sustained accuracy evidence. The community votes again.
Contributors who participate in the recovery earn new intelligence labor
scoring for that work.
Distribution shift: Stewards are responsible for updating the held-out
evaluation set to reflect changing input distributions. An evaluation set
that does not evolve with reality will pass systems that fail in practice.
Evaluation set updates are Project Layer governance events — auditable
and contestable.
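The auto-revert trigger reduces to a simple windowed check. A sketch, assuming daily accuracy scores arrive as a list — the evaluation pipeline itself is not specified here:

```python
# Sketch of the auto-revert rule: accuracy below the graduation threshold
# for 7 consecutive daily evaluations reverts the capability to hybrid mode
# automatically, with no governance vote. Names are illustrative.

def should_revert(daily_accuracy: list, threshold: float, window: int = 7) -> bool:
    """True if the last `window` daily scores are all below the threshold."""
    if len(daily_accuracy) < window:
        return False
    return all(a < threshold for a in daily_accuracy[-window:])
```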
Chain of Training
Intelligence does not emerge from a single training event. Models are built on models. Fine-tuning builds on pre-training. Specialized capability builds on general capability. The Chain of Training records this lineage.
If Capability A was trained by Contributors X, and Capability B was fine-tuned from A by Contributors Y, both X and Y hold residual attribution on B. The attribution is weighted by contribution proximity — Contributors Y (who did the specialized work) earn more than Contributors X (who laid the foundation) — but Contributors X earn something. Their labor is in the chain. The ledger remembers.
CHAIN OF TRAINING:
A "link" is defined precisely as: a fine-tuning event on a registered model
checkpoint with a recorded training dataset. This is the atomic unit of
training lineage. If the definition is ambiguous, the lineage is not tracked.
For graduated capability C with training lineage [L1, L2, ... Ln]:
proximity_weight(Li) = (1 / distance_from_C)^0.5
L1 (direct training): weight = 1.0
L2 (one step removed): weight = 0.71
L3 (two steps removed): weight = 0.58
...
Effective R_share(contributor in Li) =
R_share(contributor) × proximity_weight(Li) / Σ(proximity_weight_all)
Maximum chain depth: 5 links. Beyond 5 links, attribution rounds to zero.
This is not a limitation on intellectual lineage — it is a practical bound
on economic claim dilution.
DAG branching: real training lineage is a directed acyclic graph, not a
linear chain. When a capability was trained on data from multiple sources
or fine-tuned from multiple base models, attribution splits proportionally
across branches by data volume contribution. Each branch is traced
independently up to 5 links. Pre-training lineage (foundation model
training on general corpora) is not traced — only fine-tuning events
within the ecosystem's registered checkpoints carry attribution.
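The proximity weighting and the 5-link depth bound can be sketched as follows. Names are illustrative, and DAG branching by data volume is omitted:

```python
# Sketch of chain-of-training proximity weighting for a graduated capability
# with lineage [L1..Ln]. Illustrative only; DAG branch splitting is omitted.

MAX_DEPTH = 5  # beyond 5 links, attribution rounds to zero

def proximity_weight(distance: int) -> float:
    """(1 / distance_from_C) ** 0.5, zero beyond the maximum chain depth."""
    if distance > MAX_DEPTH:
        return 0.0
    return (1.0 / distance) ** 0.5

def effective_shares(link_shares: list) -> list:
    """link_shares[i] is the raw R_share held at link i+1 (L1 is direct training).
    Returns each link's share scaled by its normalized proximity weight."""
    weights = [proximity_weight(i + 1) for i in range(len(link_shares))]
    total = sum(weights)
    return [s * w / total for s, w in zip(link_shares, weights)]
```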
The Chain of Training is the organism's answer to the extraction pattern that defines the current AI industry. A labeler in the Philippines whose classification work trained the base model that a specialist in Berlin fine-tuned for medical diagnostics that an operator in Nairobi validated into production — all three hold a claim. The chain does not break at corporate boundaries. It does not break at national boundaries. It follows the work.
Enforcement limitation — stated honestly. Residual claims are enforceable only within the ecosystem's SplitEngine. A model trained inside the ecosystem and exported, forked, or deployed externally carries its training lineage as a historical record — but the chain has no economic teeth outside the system. The CC0 license on this document invites that outcome deliberately: the ideas should spread even where the enforcement cannot follow. The organism's defense is not legal restriction — it is economic gravity. Contributors earn more by staying in the ecosystem where their residual claims are honored than by exporting to environments where they are not.
Planned Contracts: IntelligenceRegistry and AutomationLedger
The IntelligenceRegistry contract is planned for deployment in Phase 1 alongside LaborAttestation. It will track intelligence labor contributions — task hours, complexity, quality agreement scores, verification rates — and feed them into the DPCRegistry as the fourth contribution dimension.
The AutomationLedger contract is planned for deployment in Phase 2. It will record graduation events, residual attribution schedules, chain of training lineage, and residual distribution calculations. The SplitEngine will read from the AutomationLedger to include residual claims in economic distributions.
During Phase 0, intelligence labor scoring will operate off-chain through the same VALIDATOR_ROLE oracle that handles physical and resource scoring. The oracle will compute Is/Ec/Dv intelligence inputs from verified task logs and accuracy evaluations and submit them to the existing DPCRegistry structure. No contract changes are required.
VII. The Architecture of Governance
505 Systems is designed to operate through three nested governance layers. Each layer has its own scope, participation requirements, and decision authority.
The Meta Layer
Governs constitutional parameters: the DPC formula itself, the rights and protections guaranteed to all contributors across all projects, the conditions under which projects can enter or exit the ecosystem, and the mechanisms through which the Meta Layer itself can be amended.
Meta-layer changes are designed to require the highest participation threshold, the longest deliberation windows — a minimum of 21 days for any change to constitutional parameters — and supermajority consensus. No single project, regardless of size, can change the Meta Layer unilaterally. It is designed to require ecosystem-wide participation.
The Meta Layer is also where the founder transition is encoded. During Phase 0, founders hold veto authority on constitutional changes. During Phase 1, veto authority is designed to transfer to a rotating Council. At Phase 2 graduation, no individual or group is designed to retain veto authority. The organism governs itself.
The Project Layer
Governs individual projects within the ecosystem: ONEON, Tusita, Otto, Koink, Shakrah, Panik, Ottolabs. Each project is designed to run its own governance loop — proposals, votes, execution — with the constraint that no project-level decision can violate Meta Layer principles. Within those constraints, projects are designed to be autonomous.
Project-layer DPC scoring is designed to weight contribution history within the specific project more heavily than ecosystem-wide contribution. Deep contribution to ONEON produces proportional governance authority in ONEON. A contributor who has never touched ONEON does not govern ONEON, regardless of their ecosystem-wide DPC score.
The Community Layer
Governs day-to-day participation: contribution recognition, dispute resolution, advancement through contributor levels.
The designed contributor progression:
- Observer — Watching, learning, first contributions not yet verified
- Contributor — Verified contribution history, eligible to vote on Community-layer decisions
- Steward — Sustained contribution, proposal review authority, eligible for Council election
- Council — Rotating governance seat, project-layer decision authority, Meta Layer participation rights
- Founder — Constitutional phase only (Phase 0 and Phase 1, designed to sunset at Phase 2)
Most contributors will operate at the Community Layer for the duration of their participation. This is not a limitation. Community Layer governance handles the decisions that affect contributors most directly — and the DPC scores built at the Community Layer are the same scores that determine authority at higher layers.
The Constitutional Amendment Protocol
Two rules are truly immutable — hardcoded in contract logic with no admin key: capital_governance_weight = 0.00 and human_supremacy_of_will = TRUE. These cannot be changed by anyone, including founders. The contracts enforce them regardless of who asks.
All other constitutional parameters — the DPC formula exponents, the κ range, the ρ range, governance layer thresholds, contributor progression requirements — are amendable through the Constitutional Amendment Protocol:
CONSTITUTIONAL AMENDMENT:
Phase 0: Founders hold veto on constitutional changes.
This is a trusted setup. The document is honest about it.
Phase 1: Constitutional amendments require:
- 80% supermajority of DPC-weighted votes cast
- Minimum 40% quorum of total active governance weight
- 30-day deliberation window (no shortcuts, no emergency override)
- 90-day time-lock before activation
(the community can exit, contest, or organize a counter-proposal
before any constitutional change takes effect)
- Founder veto transfers to a rotating Council of 5 Stewards
elected by DPC-weighted vote, serving 6-month terms
Phase 2: Founder veto sunsets entirely. No individual or group retains
override authority. The 80%/40%/30-day/90-day protocol is the
sole amendment mechanism. The organism governs itself.
No Admin role exists at any phase. There is no backdoor. The two immutable
rules have no upgrade path. Everything else is governed by the community
through the amendment protocol.
The 90-day time-lock is the critical mechanism. It ensures that no constitutional change can be rushed through — even with supermajority support. The delay gives minority stakeholders time to organize opposition, exit the system, or propose alternatives. This is not bureaucratic friction. It is structural protection against governance capture by a well-coordinated majority acting in haste.
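The Phase 1/2 amendment gate is mechanical enough to express as a predicate. A sketch under the stated thresholds; the on-chain form is not yet specified:

```python
# Sketch of the constitutional amendment gate: 80% supermajority of weighted
# votes cast, 40% quorum of total active weight, 30-day deliberation minimum.
# Function and variable names are illustrative.

def amendment_passes(yes_weight: float, no_weight: float,
                     total_active_weight: float,
                     deliberation_days: int) -> bool:
    cast = yes_weight + no_weight
    if deliberation_days < 30:
        return False                      # no shortcuts, no emergency override
    if cast < 0.40 * total_active_weight:
        return False                      # 40% quorum not met
    return yes_weight >= 0.80 * cast      # 80% supermajority of votes cast

ACTIVATION_DELAY_DAYS = 90                # time-lock before activation
```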
Trust Assumptions — Named Explicitly
This system is not trustless. It is trust-minimized. The document names its priests:
- Founding cohort (Phase 0 only) — trusted setup. The first attestors are selected, not earned. This bootstrapping dependency is honest and time-bounded.
- Oracle committees — rotating, DPC-weighted, contestable, but still human judgment at the base layer. The oracle architecture (Section IX) specifies how they are constrained.
- Stewards — maintain evaluation sets, review proposals, flag anomalies. Selected by DPC-weighted governance, not appointed.
- Independent reviewers — audit intelligence labor outputs. Themselves verified contributors with active DPC scores.
Every oracle, reviewer, and committee member is a trusted party. The system's integrity depends on the diversity, rotation, and contestability of these parties — not on their absence. The claim is not that the system has no priests. The claim is that the priests are selected by contribution, rotated by governance, and constrained by architecture — and that no single priest can alter the two immutable rules that define what the organism is.
VIII. The Proposal Lifecycle
Any contributor who has reached Contributor level may submit a proposal. The designed lifecycle:
Draft The proposer documents the change, the rationale, and the expected effect. Draft proposals are public. Any contributor can comment. Stewards can flag structural issues that would prevent the proposal from advancing.
Review (5 days minimum) Assigned Stewards review the proposal for constitutional alignment, technical feasibility, and scope clarity. Stewards cannot block proposals from advancing to community comment — they can only annotate them with flags that the community sees alongside the proposal.
Community Comment (7 days) Open comment period. All contributors can respond. The DPC-weighted comment sentiment is designed to be recorded and presented alongside the vote — not as a binding input, but as context for voters.
Vote Weighted by DPC scores (P_gov). Standard proposals: simple majority of weighted votes cast. Constitutional proposals: supermajority. Emergency proposals: expedited 48-hour window with higher quorum requirements. Emergency status requires a formal petition from three or more Stewards identifying a specific active threat.
Execution Where technically feasible, execution is designed to be automated via smart contract — parameter changes, fund releases, access grants. Where manual execution is required, an assigned Steward or Council member with domain expertise is accountable to the community for execution timeline and quality.
Every step of the lifecycle is designed to be on-chain and auditable. Not as a record for the founders — as a record for the contributors who built the thing being governed.
IX. Implementation Phases
The governance organism is designed to deploy in phases that match its stage of development.
Oracle Architecture
The DPC formula is only as trustworthy as the data that feeds it. The oracle infrastructure is designed with the same rigor as the formula itself.
Domain separation. No single oracle computes all inputs. Four independent oracle domains, each with its own committee:
- Digital oracle — reads directly from on-chain data (commits, deployments, contract interactions). Requires minimal trust — the chain is the source of truth.
- Physical oracle — processes MAPA attestations for labor contributions. Requires the highest trust — physical reality cannot be verified on-chain.
- Resource oracle — tracks utilization, depreciation, and condition via IoT sensors, access logs, and peer attestation. Trust level: moderate.
- Intelligence oracle — computes Qa and Vr from verified task logs and audit results. Trust level: moderate to high.
Oracle committee selection. Each oracle domain is operated by a rotating committee of minimum 5 members, selected by DPC-weighted governance vote. No committee member serves more than 2 consecutive 6-month terms. Committee membership requires Steward-level DPC standing. Different domain committees have non-overlapping membership where possible — the person who attests physical labor should not also be the person who scores its quality.
Contestability. Any contributor can challenge a score submitted by any oracle within 14 days of publication. A challenge triggers independent re-evaluation by 3 randomly selected Stewards from outside the challenged oracle's committee. If the challenge is upheld, the oracle committee member who submitted the incorrect score receives a strike. Three strikes in a term result in removal from the committee and a 12-month cooldown before re-election eligibility. Successful challengers receive a small Ec bonus for their vigilance.
Oracle failure modes — named explicitly. A compromised physical oracle can inflate labor scores. A captured intelligence oracle can grade its allies' work favorably. A colluding committee can submit fabricated resource utilization data. These are real risks. The mitigations are: domain separation (compromising one oracle does not compromise the others), rotation (no permanent committee seats), contestability (any contributor can challenge), and statistical auditing (anomalous score distributions trigger automatic review).
Majority-capture scenario. If 3 of 5 committee members in a single oracle domain are compromised simultaneously, that domain's scores are unreliable for the duration of the capture. The defense: any contributor can challenge scores (they do not need to be on the committee), and challenges are evaluated by Stewards from outside the challenged committee. If a pattern of successful challenges emerges against a single committee (3+ upheld challenges in a term), a Project Layer emergency re-election is triggered with a 48-hour expedited vote. This is explicitly a Project Layer action, not a Meta Layer constitutional change: it requires a simple majority of DPC-weighted votes rather than a supermajority, because oracle committee composition is operational, not constitutional. Additionally, cross-domain statistical comparison triggers automatic flagging when one oracle domain's score distribution diverges anomalously from the others. A physical oracle committee that suddenly scores its contributors 3× higher than the network average is visible. Capture that must remain statistically plausible is harder to sustain than capture that is invisible — but the document acknowledges that a sophisticated, patient capture of one oracle domain for one term remains a real risk.
These mitigations reduce the attack surface. They do not eliminate it. The system is trust-minimized, not trustless.
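The cross-domain statistical flagging described above could be as simple as a ratio check on mean scores. The 3× threshold mirrors the example in the text; everything else here is an illustrative assumption, not protocol:

```python
# Sketch of cross-domain anomaly flagging: a domain whose mean score diverges
# sharply from the network-wide mean is flagged for automatic review.
# The 3x default mirrors the example in the text; names are illustrative.

def flag_anomalous_domain(domain_scores: list,
                          network_scores: list,
                          ratio_threshold: float = 3.0) -> bool:
    """Flag when the domain's mean exceeds the network mean by the threshold ratio."""
    domain_mean = sum(domain_scores) / len(domain_scores)
    network_mean = sum(network_scores) / len(network_scores)
    return domain_mean > ratio_threshold * network_mean
```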
Phase 0 — Founding Cohort
Phase 0 is initializing now.
Mechanism: Snapshot + Gnosis Safe. Off-chain voting, multi-sig execution.
This is not a compromise. It is the correct starting point. A governance organism needs enough contributors to govern before on-chain infrastructure is worth deploying. Phase 0 is designed to build the founding cohort — the contributors who will ratify the DPC formula, make the first governance decisions, and determine the economic parameters that come in Phase 1.
Physical, resource, and intelligence labor contribution scoring will operate off-chain in Phase 0: the VALIDATOR_ROLE oracle will compute richer Is/Ec/Dv inputs from attested physical labor records, resource registrations, and verified intelligence task logs. No contract changes are required for this extension. The oracle submits richer scores to the existing DPCRegistry structure. Backward compatibility is complete — existing digital-only contributors are unaffected. The Automation Lifecycle begins in Phase 0 with hybrid human+AI task execution — graduation events are tracked off-chain and formalized on-chain in Phase 2.
Phase 0 is designed to end when: a minimum viable contributor base is active (tracked manually through Snapshot participation and peer attestation logs during Phase 0), the DPC scoring formula has been validated through real use and ratified by the community, and the first on-chain infrastructure deployment has been voted on and approved.
Phase 0 is the current phase.
Phase 1 — Aragon OSx with DPC Plugin
Mechanism: Aragon OSx governance with custom DPC scoring plugin.
On-chain voting with DPC-weighted ballots. The LaborAttestation, ContributionRegistry, and IntelligenceRegistry contracts are designed for deployment in this phase. Panik App governance is designed to integrate as the first live production use case — the first MY3YE project governed by a real on-chain DPC-weighted vote.
Resource contribution verification infrastructure is designed to come online in Phase 1: an oracle network for utilization tracking, automated depreciation schedules, and a ResourceRegistry contract. The RES contribution type (bit 7 in the DPC bitmap) is designed to become a live scoring dimension in this phase.
Phase 1 target: 90 days after Phase 0 graduation.
Phase 2 — Sovereign Governance + CapitalRegistry
Mechanism: Fully custom governance system with post-quantum signature support.
The CapitalRegistry and AutomationLedger contracts are designed for deployment in Phase 2. The SplitEngine is designed to read from DPCRegistry (for P_gov), CapitalRegistry (for C_econ), and AutomationLedger (for residual claims), applying the κ and ρ coefficients to produce the full economic weight for each contributor. Capital deployment becomes a formal, on-chain scored contribution dimension — with its constitutional exclusion from governance encoded in contract logic, not only in protocol documentation. Automation graduation events become on-chain governance events with immutable chain-of-training records.
At Phase 2 graduation, the Founder-level constitutional veto — designed into Phase 0 and Phase 1 as a stabilizing mechanism — is designed to sunset. The organism governs itself. The founders become contributors like everyone else.
X. The Economic Model
The economic model is built on the dual-track scoring structure established in Section V and the residual attribution mechanism established in Section VI.
The governance token for 505 Systems is planned for specification after Phase 1 is operational. This sequencing is a design choice, not a delay. Governance must prove it functions before an economic layer is introduced.
The planned model for economic distribution:
P_econ = P_gov + (C_econ × κ) + (R_residual × ρ)
contributor_reward = P_econ / Σ(P_econ_all) × reward_pool
R_residual = Σ(R_share × automation_revenue) for all graduated capabilities
where this contributor holds residual attribution
reward_pool = 92% of revenue
(Core Value Loop: 92% contributors / 5% protocol reserve / 3% governance)
The 92/5/3 split is constitutional — adjustable only through the
Constitutional Amendment Protocol (80% supermajority, 90-day time-lock).
Constitutional floor: contributor share ≥ 85%. The protocol reserve and
governance allocation are adjustable within the remaining 15%.
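A minimal sketch of the Core Value Loop split and its constitutional floor. The structure and names are illustrative:

```python
# Sketch of the 92/5/3 Core Value Loop split with the 85% contributor floor.
# Names are illustrative; the split is adjustable only through the
# Constitutional Amendment Protocol.

SPLIT = {"contributors": 0.92, "protocol_reserve": 0.05, "governance": 0.03}
CONTRIBUTOR_FLOOR = 0.85  # constitutional floor on the contributor share

def split_revenue(revenue: float, split: dict = SPLIT) -> dict:
    assert abs(sum(split.values()) - 1.0) < 1e-9   # shares must cover revenue
    assert split["contributors"] >= CONTRIBUTOR_FLOOR
    return {k: revenue * v for k, v in split.items()}
```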
A contributor with high DPC governance scores and no capital deployed is designed to have more economic reward share than a non-contributor with significant capital. A contributor with both is designed to be most economically rewarded — because they have demonstrated sustained contribution and committed capital in support of the mission. The token amplifies demonstrated alignment. It does not substitute for it.
The constitutional guarantee: labor earns at least 67% of every revenue distribution, regardless of κ. This is not a policy. It is a structural ceiling on capital's economic claim, enforceable by contract.
Contributors who participate actively in governance — Stewards who review proposals, contributors who vote consistently, Council members who execute decisions — are designed to earn governance tokens through participation. The economic incentive structure is designed so that governance participation produces stake in the system being governed.
Token issuance schedules, vesting, treasury structure, and distribution ratios are planned for specification by the founding cohort in Phase 0. The governance body that forms in Phase 0 earns the right to set those parameters. This pink paper does not set them on their behalf.
XI. The Integrity Layer
505 Systems is designed to govern not only the ecosystem's digital products, but the humanitarian infrastructure at the core of the MY3YE mission.
Aid distribution without audit trails is not governance. It is trust without verification. The pattern is familiar: funds raised for crisis response are partially distributed, and the remainder accumulates in institutional accounts with limited accountability. The problem is usually architecture — systems designed for connected, credentialed participants deployed at edges that have neither.
The integrity layer is designed around three non-negotiable properties:
Offline capability. Governance and distribution records must be designed to function without continuous internet access. Crisis zones are crisis zones because connectivity is unreliable. Any system that requires internet access for its core functions has already excluded the people who most need it. The mesh is designed to sync when connections are available and to operate on local consensus when they are not.
The Offline Consensus Protocol specifies how this works:
OFFLINE CONSENSUS:
Minimum partition size: 7 nodes required for valid local consensus.
Below 7 nodes, the partition can record events but cannot finalize
attestations or governance decisions. 7 is chosen because the
anti-collusion requirement (3 distinct attestors across 2 projects)
cannot be satisfied internally by a colluding group smaller than 7
without failing the diversity check on reconnection.
All offline records are PROVISIONAL. Attestations and governance
decisions made during an offline period are recorded with a
partition_flag and treated as provisional — they are visible but
carry no governance weight and authorize no economic distributions
until post-reconnection verification completes. This prevents
fabricated high-DPC scores from being used to resolve conflicts
in favor of the fabricating partition (the circularity problem).
Conflict resolution on reconnect: when two partitions reconnect with
conflicting records, conflicts are resolved by node count (the
partition with more participating nodes is authoritative), NOT by
DPC weight. DPC-weighted resolution would structurally disadvantage
crisis-zone partitions (which typically have lower aggregate DPC)
and create incentives for high-DPC nodes to deliberately partition.
Node count is harder to fake and does not privilege established
contributors over new ones. Ties are resolved by timestamp
(earlier record wins). Non-conflicting records from both partitions
are merged.
Post-reconnection verification: all provisional records from both
partitions are subject to mandatory re-verification by the
reconnecting network's oracle committee within 14 days. Records
that pass verification are promoted from provisional to confirmed.
Records that fail are rolled back. Governance decisions made based
on provisional attestations that later fail verification are
explicitly unwound — votes are recounted without the failed
attestations, and any proposals that would not have passed without
them are reverted. This cascading unwind is expensive but necessary:
provisional records must not create irreversible governance outcomes.
Key management offline: nodes use pre-distributed signing keys with
epoch-based validity. Standard zones: 90-day epochs. Declared crisis
zones: 30-day epochs (shorter window bounds key compromise exposure
against state-level adversaries).
If a node's key is compromised while offline, the compromised key expires at the epoch boundary even though no revocation message can reach the network. Nodes approaching epoch expiry while offline must coordinate local key rotation through the partition's consensus (a 4-of-7 threshold signature on the new key). This is imperfect — a compromised key remains valid until epoch end — but it bounds the exposure window.
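The epoch validity rule reduces to a single bounds check. Day-granular integer time is an assumption for the sketch; the spec does not fix the clock representation.

```python
STANDARD_EPOCH_DAYS = 90  # standard zones
CRISIS_EPOCH_DAYS = 30    # declared crisis zones: shorter compromise window

def key_is_valid(issued_day: int, current_day: int, crisis_zone: bool) -> bool:
    """A pre-distributed key is valid only within the epoch it was issued in;
    it expires at the boundary with no revocation required."""
    epoch = CRISIS_EPOCH_DAYS if crisis_zone else STANDARD_EPOCH_DAYS
    return 0 <= current_day - issued_day < epoch
```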
Time-bounded validity: local consensus records that have not synced
with the main network within 90 days are suspended pending full
re-attestation. The records are not deleted — they are preserved
as historical evidence but carry no weight until verified.
Mesh sync protocol: when connectivity returns, partitions exchange
signed Merkle roots of their local event logs. Conflicts are
identified by root divergence and resolved per the rules above.
Non-conflicting subtrees are promoted to confirmed after the
14-day verification window.
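The root exchange can be sketched with a standard binary Merkle tree over the event log. This is a minimal illustration (SHA-256, last-node duplication on odd layers); the spec does not fix the hash or tree shape, and signing of the roots is omitted.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(events: list) -> bytes:
    """Root of a binary Merkle tree over the serialized event log."""
    if not events:
        return _h(b"")
    layer = [_h(e) for e in events]
    while len(layer) > 1:
        if len(layer) % 2:             # duplicate last node on odd layers
            layer.append(layer[-1])
        layer = [_h(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

def needs_reconciliation(log_a: list, log_b: list) -> bool:
    """Root divergence flags a conflict for the resolution rules above."""
    return merkle_root(log_a) != merkle_root(log_b)
```

Exchanging one root per partition keeps the first sync round to a constant-size message; divergent subtrees are then walked top-down to localize the conflicting records.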
Identity from contribution, not credential. Standard identity requirements for aid distribution — national ID, UNHCR registration, institutional endorsement — exclude people arriving without documents. 505's identity model is designed to begin at contribution. A person who arrives and demonstrates capacity — any verified contribution to the community — begins building a governance identity from that first act. No prior credential is required. The first contribution is the first proof of existence in the system.
Physical contributions matter here specifically. A person who helps build the shelter that houses the community has made a contribution that the DPC is designed to recognize and score, regardless of whether they arrived with documentation. The formula was built to include them.
Auditable at every node. Distribution records are designed to be verifiable by any participant at any node without requiring access to a central database. The audit trail is designed to be distributed across the mesh. A single node going down does not create a gap in the record.
The integrity layer is designed for development in parallel with the core governance infrastructure. The first test deployment is planned in coordination with an aid organization operating in an offline or semi-connected environment. No partnerships are confirmed. The specification precedes the conversation.
XII. Privacy Architecture
A governance system that records detailed contribution history on-chain creates a surveillance ledger. In a connected city, this is a rich professional profile. In a crisis zone, it is a targeting database. A militia does not need to read your messages if they can read your contribution graph — they know who the skilled builder is, who the medical expert is, who organized the community. The privacy layer is not optional. It is prerequisite for deploying this system anywhere people face real danger.
Zero-Knowledge Governance Participation
Contributors participate in governance by proving their DPC weight exceeds a threshold — without revealing the underlying components, contribution history, or attestation graph.
ZK GOVERNANCE:
Prove: P_gov > threshold
Without revealing: Is_total, Ec_total, Dv_total, or any component scores
Implementation: PLONK-based proof over the DPC computation.
PLONK is chosen over Groth16 because it requires no trusted setup
ceremony — eliminating the need for a founding cohort to generate
a structured reference string (which would embed a permanent trust
root in the privacy layer). PLONK uses a universal, updatable
reference string that any participant can contribute to.
Proof generation cost: PLONK proofs are computationally expensive.
Contributors on low-end devices (the crisis-zone deployment target)
may delegate proof generation to a proving service without revealing
the witness — the service sees the proof request but not the
underlying DPC components. Delegated proving is specified as a
first-class protocol feature, not an afterthought.
The contributor's wallet submits the proof. The governance contract
verifies the proof. The vote is counted with the proven weight.
No observer — on-chain or off — learns anything about the contributor
beyond "their governance weight qualifies them to vote at this tier."
Tiered participation:
Community Layer: P_gov > 0.1 (low threshold, broad participation)
Project Layer: P_gov > 0.5 (moderate threshold)
Meta Layer: P_gov > 1.0 (high threshold, constitutional decisions)
The thresholds are public. The individual scores are not.
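The public threshold table maps directly to a tier lookup. What follows is an illustrative sketch of the verifier's side of the check; in the actual design the verifier sees only a proof that some tier's inequality holds, never the `p_gov` value this function takes as input.

```python
# Public tier thresholds from the spec, checked highest first.
TIERS = [
    (1.0, "meta"),       # constitutional decisions
    (0.5, "project"),
    (0.1, "community"),
]

def highest_tier(p_gov: float):
    """Return the highest governance layer this weight qualifies for,
    or None if it clears no threshold. Thresholds are strict (P_gov > t)."""
    for threshold, layer in TIERS:
        if p_gov > threshold:
            return layer
    return None
```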
Selective Disclosure
The default privacy posture: only the final P_gov weight tier is publicly visible. Everything else — component scores, contribution type breakdown, attestation relationships, labor hours, quality scores — is private unless the contributor explicitly opts to reveal it.
Contributors may choose to selectively disclose specific dimensions (e.g., reveal physical labor history to a potential project without revealing intelligence labor scores) using attribute-based credentials derived from their DPC record. Disclosure is always opt-in, granular, and revocable.
Credential Deniability
A contributor can abandon a DPC identity and create a new one. They lose their governance weight — that is the cost. But they cannot be forced to prove a DPC identity exists. The system does not maintain a registry of "all participants." Wallet addresses are pseudonymous. There is no link between a DPC identity and a real-world identity unless the contributor creates one.
This is the equivalent of disappearing messages applied to governance identity. A person crossing a border can credibly claim they have no DPC wallet. There is no central registry to contradict them. The cost is real — years of earned governance weight lost — but the option exists because the alternative (a coercible identity that cannot be abandoned) is worse.
Attestation Graph Privacy
The who-attested-whom relationship graph is the most sensitive data in the system. It reveals social connections, work relationships, and community structure. In the wrong hands, it is an org chart for targeting.
Attestation records are stored as encrypted commitments on-chain. The DPC computation can verify that valid attestations exist without revealing who provided them. Only the contributor and their attestors can decrypt the relationship.
Oracle committees access attestation data through threshold MPC (multi-party computation) — specifically, a 3-of-5 threshold scheme where each oracle committee member holds a key share. Score computation requires cooperation of at least 3 members, and no subset smaller than 3 learns anything about the underlying attestation data. The choice of MPC over alternatives is deliberate: TEEs (SGX/TDX) have been broken repeatedly and are not suitable for state-level threat models; FHE is computationally impractical for real-time score updates; MPC with a 3-of-5 threshold provides the right balance of security, performance, and corruption resistance for a rotating 5-member committee.
Full attestation graph visibility is available only to the contributor themselves (for their own records) and to Stewards conducting a formal, governance-approved collusion investigation (which requires a 4-of-5 committee vote to authorize decryption of a specific contributor's attestation subgraph — not the full graph). Casual browsing of the attestation graph is not possible by design.
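The threshold property the committee relies on can be illustrated with Shamir secret sharing over a prime field, the standard primitive behind threshold key shares. This sketch is not the committee's MPC protocol — real MPC computes on shares without ever reconstructing the key — it only demonstrates that any 3 of 5 shares suffice while fewer reveal nothing. The 3-of-5 parameters match the spec; the field size is an arbitrary assumption.

```python
import random

P = 2**127 - 1  # Mersenne prime modulus (illustrative field size)

def split(secret: int, n: int = 5, k: int = 3) -> list:
    """Split a secret into n shares, any k of which reconstruct it,
    by sampling a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

Any 2 shares constrain nothing about `f(0)`, which is the corruption-resistance property the 3-of-5 committee design depends on.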
XIII. The Law in the Machine
This is not a document about what we believe. It is a specification of what we are building.
We are not asking you to trust the founders. The founder role is designed to sunset by constitutional requirement.
We are not asking you to trust the token. The token has not been designed. It will not be designed by the founders alone.
We are asking you to evaluate the mechanism.
Contribution creates weight. Weight creates authority. Authority enables action. Action produces outcomes. Outcomes are verified. Verification updates the scores.
The formula does not ask whether your contribution was physical or digital, whether you wrote the code or taught the AI to write it. It does not privilege code over concrete, concrete over classification, or any form of labor over another. It does not privilege capital over craft. What it measures is consistent engagement, structural impact, and the degree to which you expanded access for others.
The loop is closed. The system still has priests — oracle committees, Stewards, reviewers. The document names them, constrains them, and rotates them. But two rules have no priest at all: capital cannot buy governance weight, and AI cannot accumulate it. These are in the contract. There is no key.
We came to write the law into the machine — so the machine needs no priest for the laws that matter most.
If the mechanism works — and the Phase 0 founding cohort will be the first test of whether it works — then what follows is not a governance experiment. It is a prototype for a new form of organization. An organism, not a committee. The structure the community becomes, not the structure imposed on it.
We are not asking you to follow. We are asking you to build.
The governance organism is initializing at 505.systems. The founding cohort is forming. Every verified contribution — code, construction, classification, training, land, coordination — begins building the DPC record the organism needs to function.
SOS Systems / 505 Systems — April 2026
This document describes planned architecture and design intentions. No governance contracts are currently deployed. The DPC scoring engine, LaborAttestation, ContributionRegistry, IntelligenceRegistry, GovernanceWeight, CapitalRegistry, AutomationLedger, and SplitEngine contracts are planned for development and deployment beginning in Phase 1. All governance parameters, timelines, and economic structures described herein are subject to revision by the founding cohort through the governance process itself.
CC0 License — Build on it.