Closed Source AI = Neofeudalism
Key Insight: The danger of closed-source AI is not malicious intent but structural inevitability: without deliberate decentralization, a few institutions will become permanent feudal custodians of machine intelligence.
George Hotz argues that closed-source AI development is structurally trending toward a new form of feudalism, in which a handful of labs and cloud providers become permanent custodians of machine intelligence. He acknowledges that many people in frontier AI labs have honorable motives, but contends that the institutional form itself drives concentration of compute, talent, and political legitimacy. Rather than advocating recklessness, he calls for a "free technical order" that distributes AI capability broadly. The post lays out concrete principles: multiple model lineages, open and auditable tools, local inference, commodity hardware, and rights to inspect, fork, and refuse. He frames this not as an anti-safety position but as an anti-feudal one, insisting that no single entity has earned the right to curate the future of intelligence.
No company, government, or epistemic clique has earned the right to unilaterally curate the future of mind.
This is not anti-safety. It is anti-feudal.
This model may be described as responsible, safe, or pragmatic. But in institutional terms it amounts to custodial intelligence: a world in which extraordinary cognitive power is r…
AI & ML · Society