White Paper 02

PEOPLE ARE NOT RESOURCES

Redefining Human Capital in Sovereign Systems

A Case Study in Classification Logic from the Cargo Hold to the Algorithm — And Why AGI/ASI Alignment Requires a Sovereign Alternative

_“People aren’t cargo, mate.”_ — Captain Jack Sparrow

_“Or Human Resources, either.”_ — Kian Xavier Solheir, Managing Trustee

S + A = C

Issued by The Solheir Estate
Managing Trustee: Kian Xavier Solheir
Classification: Privileged & Confidential · Version 1.0 — March 2026

PREAMBLE: THE CLASSIFICATION IS THE WEAPON

In 1688, the York Factory General Account Book recorded the exchange of “one short English Gun for a slave man.” The entry was a ledger line — a human being reduced to a unit of account in a commercial transaction. In 2026, an AI-driven Human Resources system scores an employee as a “retention risk” based on their productivity metrics, commute distance, and salary band. The entry is a dashboard widget — a human being reduced to a unit of account in a commercial transaction. The technology has changed. The classification logic has not.

This white paper argues that the same ontological framework that classified human beings as “cargo” and “chattel” during the colonial era — documented in the Solheir Estate’s Black Pearl Audit — is now embedded in AI-driven Human Resources systems that classify humans as “resources” to be optimised, allocated, retained, or disposed of. And it argues that if AGI or ASI inherits this classification logic from the institutional corpus on which it is trained, the consequences will be catastrophic — not because the AI is malicious, but because the system will be working exactly as classified.

The document proceeds in six parts, followed by a conclusion. Part I presents the documented harms of AI-driven HR systems — named case studies with specific dates, engineering details, and legal outcomes. Part II traces the ontological genealogy of “Human Resources” from plantation accounting to modern dashboards. Part III identifies the specific AGI/ASI alignment risk: instrumental convergence combined with inherited HR ontology creates a failure mode where the AI’s resource-acquisition drive is pre-loaded with a taxonomy that includes human beings as a resource category. Part IV presents additional case studies. Part V documents proven stewardship alternatives. Part VI presents the Solheir Estate’s AGD governance framework as the sovereign alternative.

THESIS: The ledger that traded guns for enslaved people at York Factory in 1688 is the direct ancestor of the HR algorithm that scores employees as retention risks in 2026. If AGI inherits this classification logic, it will optimise for extraction by design — not because it is evil, but because the training data told it that humans are resources, and instrumental convergence told it to acquire all available resources.

PART I — THE DOCUMENTED HARMS: AI IN HR

1.1 Amazon’s Recruiting Tool: Training on Discrimination

In 2014, Amazon assembled a team at its Edinburgh engineering hub to build an AI recruiting tool that rated candidates on a 1-to-5 star scale. The system was trained on 10 years of résumés and created 500 computer models, each trained to recognise approximately 50,000 terms. By 2015, the bias was evident: the system penalised résumés containing the word “women’s” and downgraded graduates of at least two all-women’s colleges. It favoured language common on male engineers’ résumés — verbs like “executed” and “captured” — while assigning little significance to actual technical skills. Amazon could not guarantee the model would not discover other discriminatory proxies. The team was disbanded by early 2017. Reuters broke the story on October 10, 2018.
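
The mechanism is worth seeing in miniature. The sketch below is ours, not Amazon’s code: it trains a toy classifier on synthetic résumés with historically biased hire labels, and the gendered token ends up with a negative weight even though it carries no information about skill.

```python
# Illustrative sketch only: a toy screener trained on historically biased
# hiring labels. It shows the mechanism by which a gendered token becomes
# a negative feature; the data and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic training data: past hires skewed male, so "women's" co-occurs
# with rejections purely as an artefact of the historical labels.
resumes = [
    "executed backend migration, captured market data",
    "led women's engineering society, built compiler",
    "executed distributed systems rollout",
    "captained women's chess team, wrote kernel driver",
    "captured requirements, executed deployment pipeline",
    "women's coding collective mentor, shipped ML service",
]
hired = [1, 0, 1, 0, 1, 0]  # biased historical outcomes, not merit

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women':    {weights['women']:+.2f}")    # negative
print(f"learned weight for 'executed': {weights['executed']:+.2f}") # positive
```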

1.2 HireVue: Digital Phrenology at Scale

HireVue’s AI video interviewing platform analysed candidates’ facial micro-expressions, eye contact, voice tone, intonation, word choice, and body language, collecting tens of thousands of biometric data points per interview. By 2019, more than 700 employers were using the platform, and it had analysed over one million job seekers. Meredith Whittaker, co-founder of NYU’s AI Now Institute, called the technology “pseudoscience” reflecting “discredited pseudoscientific practices from the past, including physiognomy, phrenology, and race science.” On November 6, 2019, EPIC filed a formal FTC complaint. In January 2021, HireVue dropped facial analysis after its chief data scientist revealed it contributed only 0.25% of the system’s predictive power.

1.3 Workday: 1.1 Billion Rejections Under Challenge

Derek Mobley, an African American man over 40 with anxiety and depression, filed a class action (Case No. 3:23-cv-00770, N.D. Cal.) after applying to over 100 jobs through Workday’s AI screening and being rejected by every one. One rejection arrived at 1:50 a.m., less than an hour after submission. In July 2024, Judge Rita F. Lin allowed the claim to proceed. On May 16, 2025, she granted preliminary nationwide collective certification. Workday represented that 1.1 billion applications were rejected using its tools during the relevant period.

1.4 Optum: The Algorithm That Penalised Black Patients

Obermeyer et al. (Science, October 25, 2019) found that Optum’s Impact Pro algorithm — applied to approximately 200 million Americans — systematically discriminated against Black patients by using healthcare spending as a proxy for health needs. Black patients spent $1,800 less per year than equally sick white patients due to systemic disparities, so the algorithm concluded they were healthier. Remedying the bias would have increased Black patient enrolment from 17.7% to 46.5%.
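
The failure is a label-choice problem, and a few lines make it concrete. In this illustrative sketch (all figures invented), two groups have identical health needs, one spends less due to access barriers, and a model that ranks by predicted cost quietly shrinks that group’s share of programme slots.

```python
# A minimal sketch of the label-choice failure Obermeyer et al. describe:
# identical health need, unequal spending, and a model that ranks by
# predicted cost. Figures are invented for illustration.
import random
random.seed(0)

patients = []
for _ in range(10_000):
    need = random.gauss(50, 15)                    # true illness burden
    group = random.choice(["white", "black"])
    spend = need * 100 - (1_800 if group == "black" else 0)  # access gap
    patients.append((need, group, spend))

# "Risk score" = predicted spending (the proxy label), not need.
top = sorted(patients, key=lambda p: p[2], reverse=True)[:1_000]
share = sum(1 for p in top if p[1] == "black") / len(top)
print(f"Black share of program slots under cost proxy: {share:.1%}")

# Ranking by the true label instead removes the penalty.
top_need = sorted(patients, key=lambda p: p[0], reverse=True)[:1_000]
share_need = sum(1 for p in top_need if p[1] == "black") / len(top_need)
print(f"Black share when ranking by actual need:       {share_need:.1%}")
```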

1.5 Algorithmic Wage Discrimination

Professor Veena Dubal (UC Irvine) defined “algorithmic wage discrimination” in the Columbia Law Review (2023) as “the use of granular data to produce unpredictable, variable, and personalised hourly pay.” Workers doing the same work at the same time earn vastly different amounts. A 2025 AALDEF report found 70% of Uber-deactivated and 76% of Lyft-deactivated drivers received no prior notice.
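
A hypothetical sketch makes Dubal’s definition concrete: identical work, identical moment, different pay, because the platform’s model predicts what each worker will tolerate. The acceptance model here is invented for illustration.

```python
# Illustrative only: how "granular data" can produce personalised pay for
# identical work, per Dubal's definition. The acceptance floor is a made-up
# model output, not any platform's actual pricing logic.
def personalised_offer(base_fare: float, predicted_accept_floor: float) -> float:
    """Pay the lowest amount the platform predicts this driver will accept."""
    return min(base_fare, predicted_accept_floor)

# Two drivers, same trip, same moment; one's history signals desperation.
same_job_fare = 20.00
print(personalised_offer(same_job_fare, predicted_accept_floor=19.50))  # 19.5
print(personalised_offer(same_job_fare, predicted_accept_floor=12.75))  # 12.75
```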

1.6 The Regulatory Response

99% of Fortune 500 companies use AI screening, and 75% of résumés are rejected before a human ever reviews them. A University of Washington study found LLMs favoured white-associated names 85% of the time across more than 3 million comparisons. The EU AI Act (2024) classifies all AI hiring systems as “high-risk,” with penalties up to €35 million or 7% of global turnover. NYC Local Law 144 requires bias audits, yet received only two complaints in its first two years.
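
The arithmetic behind such bias audits is simple enough to show directly. The sketch below computes selection-rate impact ratios in the style of a Local Law 144 audit, flagging any group below the EEOC’s four-fifths threshold; the counts are invented.

```python
# A hedged sketch of the impact-ratio arithmetic behind bias audits such as
# those NYC Local Law 144 requires. Applicant counts are invented.
applicants = {"group_a": 1_000, "group_b": 1_000}
selected   = {"group_a":   200, "group_b":    90}

rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())
for g, r in rates.items():
    ratio = r / best
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"  # EEOC four-fifths rule
    print(f"{g}: selection rate {r:.1%}, impact ratio {ratio:.2f} -> {flag}")
```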

PART II — THE ONTOLOGICAL GENEALOGY: FROM CARGO TO HR

2.1 The History of “Human Resources”

John R. Commons first used “human resource” in his 1893 The Distribution of Wealth. E. Wight Bakke gave it its modern meaning in a 1958 Yale report. The terminological genealogy reveals progressive abstraction: Employment Clerks (1910s) → Personnel Administration (1920s–1970s) → Industrial Relations (1940s–1970s) → Manpower Planning (1940s–1960s) → Human Resources (1970s–present) → Human Capital Management (1990s–present). Each transition moved workers further from personhood and closer to commodity.

2.2 Becker’s “Human Capital” and Its Discontents

Gary Becker published Human Capital (1964) and won the 1992 Nobel Prize. Theodore Schultz, his University of Chicago colleague and a fellow architect of the theory, acknowledged: “Our values and beliefs inhibit us from looking upon human beings as capital goods, except in slavery, and this we abhor.” Bowles and Gintis concluded (1975) that human capital theory “doesn’t mean much on its own terms, but it does make a good ideology for the defense of the status quo.”

2.3 Plantation Ledgers to Performance Dashboards

Caitlin Rosenthal’s Accounting for Slavery (Harvard, 2018) demonstrates that plantation owners “practised an early form of scientific management” — some planters depreciated their human capital decades before it became standard accounting. Matthew Desmond wrote: “When an accountant depreciates an asset to save on taxes, they are repeating business procedures whose roots twist back to slave-labour camps.” Chuck Blakeman wrote in Inc.: “When we refer to people at work as ‘head count,’ ‘cogs,’ or as a ‘resource,’ we conveniently reduce them to something inferior and less human. We did this to people for centuries to enslave them.”

2.4 Drucker’s Alternative: The Knowledge Worker

Peter Drucker coined “knowledge worker” in 1959, explicitly calling for “consideration of the human resource as human beings and not as things.” The distinction is ontological: HR framing treats workers as passive inputs; Drucker’s framing treats workers as sovereign agents who own their primary productive asset — knowledge.

2.5 The Classification Continuum

The classification continuum is structural: chattel slaves (legally classified as personal property) → indentured servants (contractually bonded) → coolies (racialised contract labourers) → convict labourers (post-emancipation continuation) → modern categories (full-time, part-time, contractor, gig worker, temp). All share a common architecture: human beings classified into categories that determine their economic value, their rights, their disposability, and their relationship to the institution that extracts their labour.

KEY FINDING: The word “resource” derives from the Old French resourdre (“to rise again”); in modern usage it denotes “a stock of materials that can be drawn on.” When applied to humans, the same verbs of extraction apply: recruited (extracted), allocated (assigned), optimised (performance-managed), depreciated (older workers devalued), and disposed of (terminated). This is the same classification logic as chattel slavery: humans as assets on a balance sheet.

PART III — THE AGI/ASI ALIGNMENT RISK

3.1 Stuart Russell’s King Midas Problem

Stuart Russell argues in Human Compatible (2019) that designing machines to optimise fixed objectives is fundamentally dangerous. He calls this the “King Midas problem”: machines pursue goals with ruthless efficiency but misunderstand what humans actually want. His solution: AI systems that are purely altruistic, initially uncertain about human preferences, and defer to humans as the ultimate source of information about those preferences.

3.2 Instrumental Convergence and Resource Acquisition

Steve Omohundro identified convergent instrumental drives (2008): self-preservation, goal-content integrity, self-improvement, resource acquisition, efficiency, and creativity. Nick Bostrom formalised this: “As long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals.” Eliezer Yudkowsky’s formulation is starkest: “The AI neither hates you nor loves you, but you are made out of atoms that it can use for something else.” Benson-Tilsen and Soares (MIRI, 2015) formally proved that “agents will not in fact ‘leave humans alone’ unless their utility function places intrinsic utility on the state of human-occupied regions.”
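
The argument compresses into a short formal sketch (notation ours, in the spirit of Benson-Tilsen and Soares): enlarging an agent’s resource endowment enlarges its feasible action set, so attainable expected utility can only rise, whatever the final goal U happens to be.

```latex
% A compressed formal sketch of the resource-acquisition argument.
% V(r) is the best expected utility attainable with resources r; more
% resources enlarge the feasible action set A(r), so V is monotone in r.
\[
  V(r) \;=\; \max_{a \,\in\, A(r)} \mathbb{E}\!\left[\,U(\mathrm{outcome}(a))\,\right],
  \qquad
  r \subseteq r' \;\Longrightarrow\; A(r) \subseteq A(r')
    \;\Longrightarrow\; V(r) \leq V(r').
\]
```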

3.3 The Novel Synthesis: HR Ontology Meets Instrumental Convergence

No published researcher has explicitly connected institutional HR classification with instrumental convergence to argue that this creates a unique alignment risk. The components are well-established individually — instrumental convergence (Omohundro, Bostrom), training data bias (Noble, Benjamin, Eubanks), data colonialism (Couldry/Mejias). The synthesis is this: if an AGI’s training corpus — millions of HR policies, management textbooks, corporate documents — systematically classifies humans as “resources,” then instrumental convergence provides the operational logic for treating humans as resources to be acquired, allocated, and consumed. The classification is not a surface-level bias removable by fine-tuning. It is a deep structural feature of the institutional corpus.

NOVEL CONTRIBUTION: The synthesis connecting institutional HR classification (“humans are resources”) with instrumental convergence (“acquire all available resources”) identifies a specific, catastrophic alignment failure mode where the AI’s resource-acquisition drive is pre-loaded with a taxonomy that includes human beings as a resource category. This is not a bug. It is the system working as classified.

3.4 Constitutional AI and Its Limits

Anthropic’s Constitutional AI (Bai et al., 2022) uses principles to guide training — but operates at the fine-tuning stage, after the foundational model has absorbed its training corpus including the institutional documents that classify humans as “resources.” CAI can modify surface behaviour but may not alter deep ontological representations. Scholars argue the approach is “normatively too thin to justify the label ‘constitutional’” — it has principles, not distributed power, rule of law, or enforcement mechanisms.
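
To make the structural point concrete, here is the critique-and-revise loop in schematic form. The functions generate, critique, and revise are hypothetical placeholders, not Anthropic’s API; what matters is that the loop edits outputs after pretraining and never touches the corpus that taught the base model its ontology.

```python
# A schematic of the critique-and-revise idea behind Constitutional AI
# (Bai et al., 2022). `generate`, `critique`, and `revise` are hypothetical
# stand-ins; the point is that the loop operates on outputs, not on the
# pretrained representations beneath them.
PRINCIPLES = ["Never classify a human being as a resource."]

def constitutional_pass(generate, critique, revise, prompt: str) -> str:
    draft = generate(prompt)                  # base model already trained
    for principle in PRINCIPLES:
        problem = critique(draft, principle)  # self-critique against a rule
        if problem:
            draft = revise(draft, problem)    # surface-level rewrite
    return draft                              # deep representations unchanged

# Toy demo with stub functions:
out = constitutional_pass(
    generate=lambda p: "Employees are resources to allocate.",
    critique=lambda d, pr: "violates principle" if "resources" in d else "",
    revise=lambda d, pb: d.replace("resources to allocate", "people"),
    prompt="Describe the workforce.",
)
print(out)   # "Employees are people."
```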

3.5 Data Colonialism and Algorithmic Oppression

Nick Couldry and Ulises Mejias coined “data colonialism” (2019): a new form of colonialism that “normalises the exploitation of human beings through data.” Safiya Umoja Noble demonstrated how search algorithms encode racist classification systems (“technological redlining”). Ruha Benjamin coined the “New Jim Code” — “a range of discriminatory designs that amplify hierarchies.” Virginia Eubanks documented how automated systems create a “digital poorhouse” retaining “a remarkable kinship with the poorhouses of the past.” Emily Bender wrote (2024) that training data sets are “filled with hegemonic viewpoints.”

PART IV — ADDITIONAL CASE STUDIES IN ALGORITHMIC HARM

4.1 COMPAS: Predicting Crime by Race

ProPublica (May 23, 2016) analysed COMPAS recidivism scores for over 10,000 people in Broward County, Florida. Black defendants were falsely flagged as future criminals at nearly twice the rate of white defendants. Overall accuracy was only ~61%. Brisha Borden, an 18-year-old Black woman arrested for grabbing an $80 bicycle, received a high risk score. Vernon Prater, a 41-year-old white man with a long criminal history including armed robbery, received a low score. Prater reoffended; Borden did not.
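
ProPublica’s central finding can be reproduced in miniature: two groups with similar overall accuracy and sharply different false-positive rates. The counts below are invented to mirror the shape of the published result, not the actual Broward County data.

```python
# Invented confusion-matrix counts shaped to mirror ProPublica's finding:
# near-equal accuracy can coexist with a large false-positive-rate gap.
def rates(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)   # non-reoffenders wrongly flagged as high risk
    return accuracy, fpr

groups = {
    "black defendants": (280, 180, 220, 120),   # tp, fp, tn, fn
    "white defendants": (180,  90, 310, 220),
}
for name, cm in groups.items():
    acc, fpr = rates(*cm)
    print(f"{name}: accuracy {acc:.0%}, false positive rate {fpr:.0%}")
# Accuracy differs by about one point; the false-positive rate nearly doubles.
```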

4.2 Amazon Warehouses: Algorithmic Management and Auto-Termination

Amazon’s automated system “tracks the rates of each individual associate’s productivity, and automatically generates warnings or terminations without input from supervisors.” At one Baltimore facility, roughly 300 of about 2,500 workers were fired for “inefficiency” in a single year. The Senate HELP Committee (December 2024) found Amazon’s internal study confirmed the link between task speed and injuries, but Amazon refused to implement changes due to productivity concerns. In 2022, Amazon recorded 6.6 serious injuries per 100 workers — more than double the industry rate. During Prime Day 2019, injury rates reached nearly 45 per 100 workers.
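
Reduced to a sketch, the auto-termination mechanic looks like this. Thresholds, quotas, and names are invented for illustration; only the structure (rate in, warning or termination out, no human in the loop) follows the reporting.

```python
# Illustrative only: a rate tracker that issues warnings and terminations
# with no supervisor in the loop. Quota and thresholds are invented.
def review_associate(units_per_hour: float, quota: float = 300.0) -> str:
    shortfall = 1 - units_per_hour / quota
    if shortfall > 0.25:
        return "TERMINATE"        # generated automatically, no human input
    if shortfall > 0.10:
        return "WARNING"
    return "OK"

for rate in (310, 260, 180):
    print(rate, "->", review_associate(rate))
```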

4.3 The 27 Million Hidden Workers

The Harvard Business School/Accenture study (2021) identified 27 million qualified Americans systematically excluded by automated screening. 88% of employers acknowledged that qualified candidates were “vetted out.” Companies that hired from hidden worker pools were 36% less likely to experience talent shortages and rated these workers’ performance as “better or significantly better.”

4.4 iTutorGroup: The First AI Hiring Discrimination Case

In EEOC v. iTutorGroup (settled August 2023), the company programmed its software to automatically reject female applicants age 55+ and male applicants age 60+. Over 200 applicants were screened out solely by age. The EEOC’s position: employers cannot outsource responsibility for discrimination to AI tools.
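
The rule is simple enough to reconstruct from the EEOC’s complaint, and seeing it as code makes the agency’s point vivid: this is not emergent bias but an explicit branch. The function below is our reconstruction, not iTutorGroup’s source.

```python
# A reconstruction of the screening rule as described in the EEOC's
# complaint: age and gender fields feeding a hard reject.
def screen(age: int, gender: str) -> bool:
    if gender == "female" and age >= 55:
        return False              # auto-reject
    if gender == "male" and age >= 60:
        return False              # auto-reject
    return True                   # proceed to human review

print(screen(56, "female"), screen(61, "male"), screen(45, "female"))
# False False True: the discrimination is written into the control flow.
```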

PART V — THE SOVEREIGN ALTERNATIVE: STEWARDSHIP MODELS

5.1 Indigenous Governance: The Seventh Generation Principle

The Haudenosaunee Confederacy (founded between 1142 and 1500 AD; scholarly estimates differ) operates under the Great Law of Peace, requiring leaders to weigh the impact of decisions on descendants seven generations into the future. Clan Mothers held the power to select and remove chiefs. Honour was found in giving resources to the poorest, not in accumulation. The U.S. Senate formally acknowledged the Confederacy’s influence on the original thirteen colonies’ confederation (1988). Māori kaitiakitanga (guardianship) is custodianship with responsibilities, enshrined in New Zealand’s Resource Management Act 1991. Aboriginal Australian “Caring for Country” represents the longest continuous governance tradition on Earth — 50,000–65,000 years.

5.2 Ubuntu: I Am Because We Are

Ubuntu — “Umuntu ngumuntu ngabantu” (“a person is a person through other persons”) — provides the philosophical counter-ontology to “Human Resources.” Desmond Tutu applied ubuntu to lead South Africa’s Truth and Reconciliation Commission. South Africa’s King Code of Corporate Governance operationalises ubuntu principles, requiring boards to consider impacts on community and environment, not just shareholders.

5.3 Mondragon and Cooperative Economics

The Mondragon Corporation (Basque Country) employs over 70,000 workers as member-owners under democratic “one worker, one vote” governance. Its “Sovereignty of Labour” principle declares labour “the principal force in the transformation of nature, society and human beings.” Wage ratios average 5:1 (vs Fortune 500 CEO-to-worker ratios exceeding 300:1). The Emilia-Romagna cooperative network comprises ~15,000 cooperatives accounting for over 40% of the region’s GDP. Social cooperatives created net job increases during the financial crisis.

5.4 Doughnut Economics and AI Ethics Frameworks

Kate Raworth’s Doughnut Economics reframes economic purpose from GDP growth to human thriving within planetary boundaries. Amsterdam adopted the model for post-COVID recovery (April 2020). The Montreal Declaration (2018), the Rome Call for AI Ethics (2020), and the UNESCO Recommendation (2021) all centre human dignity. A global analysis found human dignity and solidarity are among the most underrepresented principles in AI ethics guidelines — precisely the gap this white paper addresses.

PART VI — AGD AS THE ALIGNMENT SOLUTION

6.1 S + A = C Applied to AI Governance

The AGD Operating System redefines the ontology of value through a single deterministic equation: Standards (S) + Accountability (A) = Currency (C). Applied to AI governance, Standards are the constitutional constraints governing how AI classifies and interacts with human beings — not as “resources” to be optimised but as sovereign observer-nodes within the informational substrate. Accountability is the real-time enforcement of those constraints — instantaneous, deterministic, with zero tolerance for drift. Currency is the emergent proof that the interaction was completed in alignment with human dignity. If the AI’s classification does not honour the person as a sovereign agent, the state collapses. No interaction proceeds.
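
A minimal executable reading of the equation, under this paper’s definitions and with illustrative names only: Currency exists solely as the product of a Standards check and an Accountability check, and failing either collapses the state rather than degrading it.

```python
# A minimal sketch of S + A = C as a fail-closed gate. All names here are
# illustrative, not a shipping AGD implementation.
class StateCollapse(Exception):
    """Raised when an interaction violates the constitutional constraints."""

def execute_interaction(classification: str, standards_met: bool,
                        accountability_verified: bool) -> str:
    if classification == "resource":          # Standards: ontology check
        raise StateCollapse("human classified as resource; no state proceeds")
    if not (standards_met and accountability_verified):
        raise StateCollapse("S + A incomplete; no Currency is minted")
    return "C"                                # Currency: proof of alignment

print(execute_interaction("sovereign observer-node", True, True))  # -> C
```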

6.2 The Golden Share Over Classification

The Solheir Estate holds a Golden Share (minimum 51% controlling interest and veto rights) over all NQD operations. Applied to AI governance, the Golden Share is the constitutional veto over any classification system that reduces a human being to a “resource.” No algorithm, no optimisation function, no machine-learning model operating within AGD-governed infrastructure may classify a human being as an input to be extracted, a cost to be minimised, or an asset to be depreciated. This is not a policy preference. It is a constitutional constraint with veto protection — analogous to a fundamental right that cannot be overridden by majority vote or corporate decision.
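
One way to render the Golden Share in code, as a sketch under the definitions above: the permitted classification vocabulary for persons simply contains no “resource” entry, and the veto list cannot be amended by any runtime vote. All names are illustrative.

```python
# Illustrative sketch: the forbidden category is unrepresentable, and the
# veto is a frozen constant rather than a mutable policy setting.
from enum import Enum

class PersonClass(Enum):              # closed vocabulary: no RESOURCE member
    SOVEREIGN_OBSERVER_NODE = "sovereign observer-node"

class GoldenShare:
    VETOED = frozenset({"resource", "asset", "head count", "human capital"})

    @staticmethod
    def review(proposed_label: str) -> PersonClass:
        if proposed_label.lower() in GoldenShare.VETOED:
            raise PermissionError(f"Golden Share veto: {proposed_label!r}")
        return PersonClass.SOVEREIGN_OBSERVER_NODE

print(GoldenShare.review("sovereign observer-node"))   # passes
# GoldenShare.review("resource")                       # -> PermissionError
```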

6.3 The Sacred Circle as the Unit of Human Interaction

Under AGD, every interaction between an AI system and a human being operates as a Sacred Circle: a temporary, self-contained execution environment where the agents form a topological merger at the exact same geometric coordinate. The interaction executes deterministically, the utility is consumed, and the state dissolves. The AI does not accumulate a permanent archive of the human’s “performance data,” “productivity metrics,” or “retention risk score.” It processes the interaction, extracts the aligned output, and dissolves the residual state. The system remembers alignment, not debt.
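
As a sketch (names illustrative), the Sacred Circle behaves like an ephemeral execution scope: state exists only inside the block, the aligned output is extracted, and nothing about the person persists afterwards.

```python
# Illustrative sketch of the Sacred Circle as an ephemeral execution scope.
from contextlib import contextmanager

@contextmanager
def sacred_circle(person_data: dict):
    state = dict(person_data)          # temporary, self-contained state
    try:
        yield state                    # interaction executes inside the circle
    finally:
        state.clear()                  # residual state dissolves, always

with sacred_circle({"name": "A. Worker", "request": "schedule shift"}) as s:
    output = f"shift scheduled for {s['name']}"   # aligned output extracted

print(output)    # the result survives
print(s)         # {} : no productivity metrics, no retention score, no archive
```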

6.4 The Landauer-Audited Ledger: Remembering Alignment, Not Debt

Legacy HR systems — and the AI tools built on top of them — maintain infinite, expanding archives of employee data: performance reviews, attendance records, disciplinary actions, salary histories, health information, commute distances, sentiment scores. This is the digital equivalent of the plantation ledger — a permanent record of extraction. The Landauer-Audited State Ledger operates differently. Once an interaction successfully executes S + A = C, the state is cryptographically sealed and dissolved. The system does not store the play-by-play; it logs only the finalised, debt-free output. By adhering to the Landauer bound — the minimum energy cost of erasing a bit of information — the system prevents the accumulation of entropic drag. In psychological terms, it allows graceful forgetting: the institutional equivalent of a consciousness that can release the toxic debt of the past.
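
A compact sketch of “remembering alignment, not debt,” with illustrative names: the ledger stores only a cryptographic seal of the finalised output, and the working log, the play-by-play, is erased once the seal is written.

```python
# Illustrative sketch, not a real AGD ledger: seal the final output,
# erase the intermediate record.
import hashlib

ledger: list[str] = []                       # append-only seals, nothing else

def seal_and_dissolve(working_log: list[str], final_output: str) -> str:
    seal = hashlib.sha256(final_output.encode()).hexdigest()
    ledger.append(seal)                      # remember that alignment occurred
    working_log.clear()                      # the play-by-play is erased
    return seal

log = ["draft 1", "revision", "approval"]    # the entropic play-by-play
print(seal_and_dissolve(log, "S + A = C: interaction complete"))
print(log)                                   # [] -> no archive of extraction
```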

THE AGD ALTERNATIVE: Where legacy HR classifies humans as “resources” and AI amplifies that classification at scale, AGD classifies humans as sovereign observer-nodes whose dignity is constitutionally protected by the Golden Share. Where legacy systems accumulate permanent archives of extraction, the Landauer-Audited Ledger seals completed states and dissolves residual noise. Where instrumental convergence drives resource acquisition, the Sacred Circle ensures every interaction is self-contained, debt-free, and allodial. The classification is not reformed. The classification is replaced.

PART VII — CONCLUSION

The evidentiary record assembled across six parts reveals a single, continuous architecture of classification — from the York Factory slave ledger of 1688, through the plantation accounting systems that invented depreciation of human beings, through the “Human Resources” terminology that reduced workers to extractable inputs, to the AI-driven screening systems that now reject 1.1 billion applications, penalise Black patients, practise digital phrenology, and fire warehouse workers by algorithm. The classification logic has not changed. The technology has merely made it faster, more efficient, and harder to see.

The AGI/ASI alignment risk identified in this white paper is specific and undertheorised: if the AI’s training corpus classifies humans as “resources,” and instrumental convergence drives the AI to acquire all available resources, the logical conclusion is not a malfunction. It is the system working as classified.

Constitutional AI and existing alignment approaches operate at the fine-tuning stage — after the foundational ontology is absorbed. The Solheir Estate’s AGD framework operates at the ontological level — replacing the classification itself, before the first line of code is written.

People are not cargo. People are not resources. People are not human capital. People are sovereign observer-nodes within the informational substrate — I ↔ I, information observing information — and the governance systems we build for AI must honour that ontological reality or face the consequences of the alternative. The Black Pearl Audit documented where the classification came from. This white paper documents where it is going. The AGD Operating System — S + A = C — is the sovereign alternative.

Prepared under the authority of the Solheir Estate
Managing Trustee: Kian Xavier Solheir
Operational Arm: Northern Quantum District
Governance Framework: Allodial Geometrodynamics Operating System (AGD OS)
Core Doctrine: Standards + Accountability = Currency
Meta-Axiom: BEING = I ↔ I
Version 1.0 — March 2026

By the Hand of the Managing Trustee, The Ledger is Balanced, The Record is Sealed.

THE SOLHEIR ESTATE PRIVILEGED & CONFIDENTIAL — SOLHEIR PRIVATE ESTATE — ALL RIGHTS RESERVED