When Founders Feud

Published: v0.2.1

The tech industry is discovering that institutions have veto power. While founders spent the past decade consolidating control and moving fast, 2026 is shaping up as the year when governments, courts, and physical reality impose limits that capital alone cannot overcome.

The pattern runs across geographies and domains. Musk is testifying under oath about the OpenAI origin story, transforming founder mythology into legal liability. China is blocking Meta's Manus acquisition and freezing new robotaxi licenses after Baidu's traffic meltdown, asserting regulatory authority over AI development. Even the Justice Department is prosecuting a former FBI director over a social media post, testing the boundaries of speech in the Trump era.

These are not isolated incidents. They signal a phase change in how technology companies operate. The constraints are both sudden (regulatory freezes, criminal indictments) and structural (nuclear waste management for AI data centers). What made tech exceptional was its ability to scale faster than institutions could respond. That window is closing.

The question is not whether tech faces more friction, but whether companies can adapt their operating models before courts and governments impose solutions. The Musk trial offers a preview: what founders said privately to each other now matters in public proceedings. The informal era is over.

Deep Dive

Founder Testimony Creates New Legal Risk for Early-Stage Companies

Elon Musk's testimony in his lawsuit against OpenAI introduces a liability that most early-stage companies have not priced in: informal founder agreements and casual conversations can become sworn evidence in future litigation. Musk testified under oath about his falling-out with Larry Page over AI safety, a story previously confined to biographies and podcasts. The shift from narrative to testimony matters because courts treat the two differently.

For founders and VCs, the implication is straightforward. The informal governance structures that defined early-stage tech companies carry legal weight that compounds over time. Board meeting minutes that were never taken, verbal agreements about equity splits, and late-night conversations about mission and control can all surface in litigation years later. The OpenAI case demonstrates that even friendship dynamics become relevant when billions of dollars and corporate control are at stake.

This creates a practical challenge for early-stage companies: formalize early or accept that informality is a future liability. The problem is timing. Over-formalizing too early can slow decision-making and introduce rigidity that kills startups. But the cost of informal governance rises exponentially with company value. OpenAI went from nonprofit research lab to one of the world's most valuable private companies in less than a decade. Most governance structures did not scale at that speed.

The second-order effect is that founder disputes, previously resolved through private negotiation or settlement, are increasingly playing out in public court proceedings. That changes the dynamics of founder breakups. When Musk recruited Ilya Sutskever from Google in 2015, it ended his friendship with Page. Now that decision is being examined under oath, with transcripts available to competitors, employees, and future business partners. The reputational cost of litigation now extends beyond the immediate parties.

China's Manus Block Signals End of Regulatory Arbitrage for AI Startups

China's decision to block Meta's acquisition of AI startup Manus exposes a fundamental miscalculation by cross-border AI companies: relocating headquarters to evade regulatory scrutiny no longer works. Manus moved from China to Singapore to access capital and escape Beijing's oversight. Chinese regulators responded by asserting jurisdiction anyway and unwinding the transaction.

For AI startups originating in China, this closes a window that many assumed would remain open. The regulatory arbitrage strategy, which worked for previous generations of internet companies, fails when governments treat AI as strategic infrastructure. Both Washington and Beijing now view domestically controlled AI as a national security asset. That means restrictions on foreign investment, export controls on chips, and now retroactive vetoes of acquisitions involving relocated companies.

The immediate impact hits AI M&A. Meta presumably conducted due diligence on Manus, negotiated a price, and believed the transaction would close. Four months later, Chinese regulators killed the deal. For acquirers, this introduces a new category of risk: even if the target company relocated and legally operates outside China, if the founding team and core technology originated in China, Beijing can block the transaction.

For VCs, the calculus on funding Chinese AI startups shifts. Exit opportunities narrow when the two largest potential acquirers, US and Chinese tech giants, face regulatory barriers to cross-border deals. That leaves IPO as the primary path, which takes longer and succeeds less frequently than acquisition. The result is likely lower valuations for Chinese AI startups, even those with promising technology.

The broader pattern is fragmentation. Rather than a global AI market, we are heading toward regional AI ecosystems with limited interoperability. Companies will need to pick a regulatory jurisdiction early and build for that market. The era of building in one country, relocating for capital, and selling to acquirers anywhere is over.

AI's Nuclear Waste Problem Is Already Here

The surge in AI data center construction has exposed an infrastructure constraint that most of the industry ignored: nuclear waste disposal. Tech companies are signing deals for nuclear power to meet AI electricity demand, but the US still lacks a permanent storage solution for radioactive waste nearly 70 years after the first commercial reactor came online. This is not a future problem. It is a current bottleneck that compounds with every new reactor.

For tech executives funding nuclear projects, the waste question is unavoidable. Nuclear reactors produce roughly 2,000 metric tons of high-level waste annually in the US alone. That material is currently stored in temporary facilities at reactor sites, in pools and concrete casks designed for decades, not centuries. The longer-term solution, deep geological repositories, exists in planning documents but not in operational reality. Finland is furthest along and expects to open its facility this year. The US designated Yucca Mountain in Nevada as a storage site in 1987, but political opposition has stalled progress for over a decade.

The mismatch between AI's growth rate and nuclear waste infrastructure creates a timing problem. AI companies need power now. Nuclear projects take years to build. Waste disposal solutions take decades to implement. The result is a growing inventory of radioactive material with no permanent home, stored at sites that were never meant to be final destinations.

For investors evaluating nuclear-powered AI infrastructure, this is a balance sheet risk that will eventually surface. Either companies fund waste disposal themselves, building the geological repositories that governments have failed to complete, or they accept regulatory limits on new reactor construction once temporary storage reaches capacity. Both outcomes are expensive.

The pattern mirrors other infrastructure bottlenecks in AI: transformer capacity, water supply for cooling, transmission lines. Scaling AI requires physical resources that do not scale at software speeds. The nuclear waste problem is simply the most radioactive version of a broader constraint. Tech's answer so far has been to assume someone else will solve it. That assumption is breaking down.

Signal Shots

Google Fills Pentagon's AI Gap After Anthropic's Exit: Google signed a contract granting the Defense Department access to its AI for classified networks after Anthropic refused terms that would allow domestic mass surveillance and autonomous weapons use. The deal includes language discouraging those applications, similar to OpenAI's contract, but its enforceability remains unclear; 950 Google employees have signed an open letter opposing unrestricted military AI sales. This completes a rapid vendor swap in which Anthropic's principled stand created immediate business opportunities for competitors willing to accept Pentagon terms with looser constraints. Watch whether the contract language proves enforceable or merely performative, and whether employee pressure affects implementation.

EU Forces Google to Open Android's AI Layer: The European Commission is requiring Google to give rival AI services the same deep Android access as Gemini, including app interaction, task execution, and custom wake word activation. Brussels argues AI is now central to how users interact with devices, making interoperability critical for competition. Google counters that Android is already open by design and mandated access undermines security while driving up costs. The consultation runs through May 13. This extends DMA enforcement beyond app stores into the operating system AI layer, establishing that platform holders cannot reserve system-level AI capabilities for their own services. Watch how Google implements access without compromising security claims, and whether Apple faces similar EU pressure for iOS.

GitHub Reliability Crisis Goes Public: GitHub fixed a critical vulnerability in under six hours after AI-assisted discovery by Wiz Research, but the incident highlights broader platform instability. HashiCorp co-founder Mitchell Hashimoto announced he is moving his Ghostty project off GitHub after experiencing outages nearly every day for a month, calling it "no longer a place for serious work." Hashimoto, GitHub user 1299 from 2008, says he wants to ship software but GitHub blocks him for hours daily. The timing coincides with Microsoft's admitted Windows quality problems and AI overreach. Watch whether other major projects follow Hashimoto's exit and how Microsoft responds to its flagship developer platform becoming unreliable during its AI transformation push.

GoDaddy Handed Over 27-Year Domain Without Authentication: GoDaddy is investigating claims it transferred complete control of a client's 27-year-old domain to another customer in four minutes without requiring authentication or supporting documents. The Pennsylvania IT firm managing the domain documented 32 phone calls over five days trying to recover it, with the nonprofit client losing website and email access. The transfer was apparently triggered by an internal user who mistakenly identified the wrong domain based on an email signature. The recipient, realizing the error, voluntarily returned control. This reveals catastrophic domain registrar security failures where authentication protections can be bypassed by internal processes. Watch for class action formation and whether this forces industry-wide registrar security audits.

Justice Department Targets Cloudera Over PERM Violations: The DOJ charged Cloudera with discriminating against US workers by routing American job applications to a non-functional email address while sponsoring foreign workers through the PERM program for those same positions. Cloudera allegedly maintained a separate hiring process that dumped US candidate applications, then certified to the Labor Department it could not find qualified American workers for at least seven roles between 2024 and 2025. The case follows similar DOJ enforcement against Apple, which settled for $25 million in 2023. This signals sustained federal scrutiny of tech's visa practices under expanded labor certification requirements. Watch whether this becomes pattern enforcement across the industry or remains limited to egregious cases with clear evidence trails.

Scanning the Wire

UAE to Leave OPEC: The United Arab Emirates is exiting the oil cartel after 47 years of membership, marking the first voluntary departure by a major producer since Qatar in 2019. (Hacker News)

GitHub Moves to Usage-Based Copilot Pricing: GitHub says it can no longer absorb escalating inference costs from its heaviest AI users and will begin charging based on actual usage rather than flat subscriptions. (Ars Technica)

Apple Adds Annual Commitment Tier for App Subscriptions: Apple is letting developers offer lower monthly pricing in exchange for 12-month commitments, creating a new retention tool for subscription apps. (TechCrunch)

Match Group Invests $100M in Gay Cruising App Sniffies: The dating conglomerate is backing the location-based hookup app as its newest attempt to recapture mobile users losing interest in traditional dating platforms. (TechCrunch)

General Motors Bringing Gemini to Four Million Vehicles: GM will roll out Google's AI assistant via over-the-air updates to model year 2022 and newer vehicles with Google built-in across Cadillac, Chevrolet, Buick, and GMC brands. (The Verge)

True Anomaly Raises $650M for Space Defense Systems: The four-year-old startup plans to scale manufacturing of space interceptors for the Trump administration's Golden Dome program and double its workforce by year-end. (CNBC)

Goldman Sachs and Bain Back AI Marketing Startup Hightouch: The Trade Desk also invested in the funding round, valuing Hightouch at $2.75 billion as enterprise marketing tools adopt AI-powered customer data activation. (WSJ)

Scout AI Raises $100M to Train Combat AI Agents: Coby Adcock's startup is developing AI systems that give individual soldiers control of autonomous vehicle fleets, with training conducted at dedicated bootcamp facilities. (TechCrunch)

Humanoid Robots Begin Luggage Sorting Tests in Tokyo: Haneda Airport is testing humanoid robots for cargo loading and cabin cleaning as Japan's labor shortage pushes airports toward automation. (Ars Technica)

Pitney Bowes Hit by 8.2M Email Address Leak: ShinyHunters claims responsibility for data dump including names, phone numbers, and physical addresses from the logistics technology company, adding another victim to its ongoing campaign. (The Register)

Australia Forces Big Tech to Pay for News or Face Tax: Platforms must strike deals with media outlets or pay up to 2.25% tax on Australian revenue, with the rate dropping to 1.5% if sufficient agreements are reached. (TechCrunch)

SAP API Policy Locks Out Third-Party AI Tools: SAP is prohibiting use of its APIs to integrate with AI systems outside its endorsed architectures, raising concerns the policy will push customers and partners toward undocumented APIs. (The Register)

Outlier

SAP's API Lockdown Reveals the New Enterprise Power Play: SAP is blocking third-party AI systems from accessing customer data through its APIs unless they use SAP-approved architectures. This is not about security or performance. It is about control. When enterprise software vendors realize AI sits between their customers and their data, APIs stop being integration points and become chokepoints. The incentive structure is clear: let customers export data to AI tools they control, or force AI tools to integrate through vendor-approved channels where the vendor takes a cut. SAP is betting customers will pay for approved integrations rather than risk undocumented API workarounds. If this works, expect Oracle, Salesforce, and Workday to follow with similar policies. The API economy was built on openness. The AI economy is rebuilding it around tollbooths.

The nice thing about institutions imposing limits is that someone else does the hard work of saying no. Founders can go back to building things instead of pretending they want to regulate themselves.
