The Profitability Question

The gap between artificial intelligence ambitions and economic fundamentals has never been wider. While AI labs pitch investors on paths to superintelligence and prepare policy proposals for a transformed economy, their financial documents reveal a more pedestrian problem: inference costs consume over half of revenue, making profitability dependent on whether you count the expense of actually training the models.
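
To make the accounting point concrete, here is a toy calculation with invented numbers, not any lab's actual financials: the same period can show a positive or a negative margin depending on whether training compute is charged against revenue.

```typescript
// Toy numbers, entirely hypothetical -- not any lab's actual financials.
const revenue = 1_000;          // quarterly revenue, $M
const inferenceCost = 550;      // serving costs: over half of revenue
const operatingCost = 300;      // staff, sales, overhead
const trainingAmortized = 400;  // training compute attributed to the period

// Exclude training and the quarter looks profitable...
const marginExTraining = revenue - inferenceCost - operatingCost;   // +150

// ...include it and the same quarter runs at a loss.
const marginInclTraining = marginExTraining - trainingAmortized;    // -250

console.log({ marginExTraining, marginInclTraining });
```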

This creates a strange dynamic. Companies project different profit scenarios based on accounting choices while simultaneously advocating for higher capital gains taxes and public investment funds to manage the societal impact of technologies they haven't yet made economically viable. The underlying assumption is that scale will solve unit economics, but each generation of models amplifies rather than reduces computational demands.

The organizational signals reinforce the tension. When a CFO gets sidelined from financial meetings while the CEO commits to $600 billion in spending, it suggests strategic vision is overtaking financial discipline. The question isn't whether these companies will achieve their technical goals. It's whether the economics can support the vision before the capital runs out. Every other industry has had to answer this question. AI's turn has arrived.

Deep Dive

Corporate surveillance has moved from tracking what you do to cataloging the tools you use to do it

LinkedIn's browser fingerprinting operation, which silently scans for over 6,000 Chrome extensions every time you visit the site, represents a shift in how platforms extract value from users. The company isn't just watching your professional network anymore. It's monitoring which sales tools your employer uses, which productivity extensions you've installed, and building a device fingerprint precise enough to track you across cookie resets. The scale matters: over a billion users, most on Chrome-based browsers, with no disclosure in the privacy policy and no opt-out mechanism.
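
LinkedIn's exact implementation isn't public, but the general technique is well documented: a page probes each extension's web-accessible resources and records which probes succeed. A minimal sketch, with a placeholder extension ID and resource path rather than LinkedIn's actual probe list:

```typescript
// Sketch of client-side extension detection via web-accessible resource
// probing. The extension ID and resource path are placeholders.
type Probe = { id: string; resource: string };

const PROBES: Probe[] = [
  // e.g. a sales tool or password manager that exposes an icon through
  // web_accessible_resources in its manifest
  { id: "abcdefghijklmnopabcdefghijklmnop", resource: "images/icon-128.png" },
];

function isInstalled(p: Probe): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    // Chrome only serves chrome-extension:// resources to regular pages
    // when the extension declares them web-accessible, so a successful
    // load means the extension is present.
    img.onload = () => resolve(true);
    img.onerror = () => resolve(false);
    img.src = `chrome-extension://${p.id}/${p.resource}`;
  });
}

async function scanExtensions(): Promise<string[]> {
  const hits = await Promise.all(PROBES.map(isInstalled));
  // The resulting presence/absence bitmap feeds the device fingerprint.
  return PROBES.filter((_, i) => hits[i]).map((p) => p.id);
}
```

Repeat that probe across thousands of known extension IDs on every page load and you get the catalog described above.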

The business logic is straightforward. LinkedIn competes with sales intelligence tools like Apollo and ZoomInfo. Knowing which companies have installed competing browser extensions gives LinkedIn real-time market intelligence about customer acquisition and churn. Because LinkedIn knows your employer, scanning your browser creates a database of which organizations are evaluating or deploying rival products. That's not a security measure. That's competitive intelligence gathered through covert device scanning.

The regulatory exposure is significant. Europe's €310 million fine against LinkedIn in 2024 for processing user data without valid consent established precedent. The Irish Data Protection Commission already demonstrated willingness to challenge LinkedIn's data practices. A scanning operation that explicitly targets job-hunting extensions, political interests, and health-related tools, all without disclosure, looks like a test of how far platforms can push before enforcement catches up.

For tech workers, the implication is uncomfortable: the professional platform you're required to use for career development is running surveillance software on your device to infer your employment intentions and tooling preferences. For founders building browser extensions or competing with LinkedIn, the message is clearer: Microsoft's subsidiary is actively monitoring your customer base through client-side scanning at a scale that would require significant legal resources to challenge. The 1,252% increase in tracked extensions over two years suggests the practice is accelerating, not contracting.


Social engineering has industrialized to the point where six months of relationship building is standard operational procedure

The Drift Protocol attack that drained $270 million wasn't a technical exploit in the traditional sense. North Korean state actors spent half a year building a legitimate presence: meeting contributors at conferences across multiple countries, depositing over $1 million in capital, integrating an Ecosystem Vault, and holding regular working sessions about trading strategies. They presented as a quantitative trading firm with verifiable backgrounds and technical fluency. When the attack executed on April 1, the relationship was six months old and had survived multiple in-person meetings.

The compromise vectors were prosaic: a malicious TestFlight app that bypassed App Store security review, and a known VSCode vulnerability that executed code the moment a file was opened. Once devices were compromised, the attackers obtained the two multisig approvals needed to execute pre-signed transactions that had been sitting dormant for over a week. The entire operation cost approximately $1 million in deposited capital and six months of sustained effort to extract $270 million.
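
The approval logic the attackers had to satisfy is simple to sketch. Here is a generic M-of-N multisig execution check, illustrative rather than Drift's actual contract code, with hypothetical signer addresses: once two signer devices are compromised, the threshold is met and the dormant transactions can execute.

```typescript
// Generic sketch of an M-of-N multisig execution check -- illustrative,
// not Drift's actual contract logic. Signer addresses are hypothetical.
interface PendingTransaction {
  payload: string;          // pre-signed transaction data, sitting dormant
  approvals: Set<string>;   // signer addresses that have approved it
}

const SIGNERS = new Set(["0xAlice", "0xBob", "0xCarol"]);
const THRESHOLD = 2;

function canExecute(tx: PendingTransaction): boolean {
  let valid = 0;
  for (const addr of tx.approvals) {
    if (SIGNERS.has(addr)) valid++;   // the only check is key possession
  }
  // Nothing here asks whether the key holders are who they claim to be.
  return valid >= THRESHOLD;
}
```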

This raises an uncomfortable question for crypto protocols and any organization relying on multisig governance: what security model is designed to catch an adversary willing to invest six months and a million dollars building trust? Traditional due diligence checks employment history, verifies identities, and validates technical competence. The Drift attackers passed all three. They weren't North Korean nationals. They were third-party intermediaries with fully constructed professional networks built to withstand scrutiny.

For founders, the implication is that operational security now requires treating every contributor as a potential attack vector, even after months of successful collaboration and face-to-face meetings. For crypto protocols specifically, the incident exposes a structural weakness in multisig-based security models: they assume the humans controlling the keys are who they claim to be. When state actors can afford to play the long game, that assumption breaks down at scale.

Signal Shots

Software Engineering Hiring Surges 30% Year to Date: TrueUp data shows over 67,000 software engineering job openings, up 30% in 2026 so far and roughly double the level from mid-2023, marking the strongest hiring market in three years. This reverses two years of contraction that followed the 2022-2023 tech layoffs. The recovery matters because it signals companies are past the efficiency-focused restructuring phase and back to building headcount, which typically precedes increased R&D spending and product velocity. Watch whether hiring concentrates in AI infrastructure roles or returns to traditional application development, which will indicate where companies believe competitive advantage lies.

Japan Positions Physical AI as Industrial Continuity Tool: Japan's Ministry of Economy aims to capture 30% of the global physical AI market by 2040, driven less by innovation ambition than demographic necessity as the working-age population is projected to shrink by 15 million over the next 20 years. The country already holds 70% of the global industrial robotics market and is now deploying AI-powered systems across factories, warehouses, and infrastructure with government backing of $6.3 billion. This matters because Japan's approach treats automation as an industrial survival requirement, not an efficiency optimization, creating different incentive structures than U.S. or Chinese deployment models. Watch whether Japan's strength in precision components translates into control over the physical layer of AI systems, which could create supply chain leverage as autonomy scales.

Brands Deploy "No AI" Labels as Trust Signal: Companies are adopting "No AI" disclaimers on marketing content and products to differentiate themselves as AI-generated material becomes pervasive across consumer touchpoints. This mirrors organic food labeling and represents a bet that a segment of consumers will pay a premium for human-created work as AI output floods the market. The move matters because it suggests brand managers are observing measurable consumer skepticism about AI quality or authenticity. Watch whether "No AI" becomes a sustained positioning strategy or a temporary reaction to early AI output quality issues, and whether premium pricing for human-created work proves sustainable as AI capabilities improve.

Apple's App Store Review System Buckles Under Vibe Coding Surge: New app submissions jumped 84% in a single quarter as AI coding tools like Cursor and Lovable turned natural language into deployable software, straining Apple's review infrastructure and extending approval times from 24 hours to as long as 30 days. Apple has begun pulling apps that generate code dynamically, creating tension between its static review model and AI tools that execute new code on demand. This matters because it exposes a fundamental mismatch between platform gatekeeping designed for traditional development cycles and AI tools that compress app creation from months to minutes. Watch whether Apple expands review capacity, updates guidelines to accommodate dynamic code, or continues enforcement that pushes AI-generated apps toward Android, where constraints are lighter.

Microsoft Terms Label Copilot "Entertainment Only" While Charging $30 Monthly: Microsoft's Copilot Terms of Use state the product is "for entertainment purposes only" and warn users not to rely on it for important advice, language typically associated with psychic services rather than enterprise productivity tools. The disclaimer sits alongside pricing of $30 per user monthly for Microsoft 365 Copilot and reflects legal defensiveness about reliability, but the gap between marketing and fine print is notable given only 3.3% of eligible users pay for the tool. This matters because it makes explicit what many users have experienced: the current generation of AI assistants remains unreliable enough that vendors won't stand behind their outputs. Watch whether other AI vendors adopt similar disclaimers as adoption lags and litigation risk increases, and whether enterprise customers demand performance warranties before committing to AI tools at scale.

Anthropic Blocks Subscription Users from OpenClaw Agent Framework: Anthropic has cut Claude Pro and Max subscribers off from using their flat-rate plans with OpenClaw, the fastest-growing open-source AI agent framework with 247,000 GitHub stars, forcing users onto pay-as-you-go billing that can cost 10 to 50 times their previous monthly outlay. The move followed OpenClaw creator Peter Steinberger joining OpenAI in February and targets agentic use cases where a single autonomous instance running for 24 hours can consume the equivalent of $1,000 to $5,000 in API costs. This matters because it exposes the unsustainable economics of flat-rate AI subscriptions when users shift from conversational queries to autonomous agent workloads. Watch whether this accelerates OpenAI's competitive position in the agent market and whether other AI vendors follow with similar restrictions, effectively ending the brief window where developers could run intensive agentic systems on consumer subscription pricing.

Scanning the Wire

Netflix debuts VOID to erase objects from video scenes: The streaming company's new vision-language model can remove objects from footage and simulate how remaining elements would interact in their absence, potentially streamlining post-production workflows. (The Register)

Microsoft forces Windows 11 25H2 rollout with no full opt-out: The company is using an ML-based system to push version 25H2 to devices still running 24H2, with no mechanism for users to permanently decline the update as 24H2 approaches end of support in October. (Tom's Hardware)

Polymarket removes wagers on Air Force rescue timing: The prediction market pulled betting contracts tied to when the U.S. would confirm rescue of service members shot down over Iran after Democratic congressional criticism. (TechCrunch)

CBP facility security codes leak via public flashcards: Sensitive gate security information for Customs and Border Protection locations appears to have been exposed through Quizlet study materials accessible online. (Ars Technica)

Monzo exits U.S. market three months after securing EU banking license: The UK challenger bank is shutting American operations and cutting 50 roles, stopping new signups immediately and closing existing accounts by June as it focuses on European expansion. (The Next Web)

Italian court orders Netflix to roll back prices to 2015 launch levels: The Court of Rome ruled that repeated price increases between 2017 and 2024 violated consumer protection law, potentially entitling Italian subscribers to refunds up to €500 each. (The Next Web)

Meta suspends AI training partner after supply chain attack: The company froze collaboration with $10 billion startup Mercor following a breach that exposed training methodologies and data practices, not just user information. (The Next Web)

AI data center buildout creates new category of insurance risk: Rapid technological change and massive capital inflows are forcing insurers to develop new underwriting models for facilities that combine unprecedented power density with novel operational risks. (CNBC)

Outlier

Insuring the Uninsurable: AI data centers are breaking insurance underwriting models because the facilities combine risks that traditional actuarial tables never contemplated: power densities that can reach 100-200 kilowatts per rack, cooling systems operating at the edge of thermodynamic limits, and technology that becomes obsolete before depreciation schedules complete. Insurers are building new risk categories in real time while private capital floods in faster than they can model failure modes. This matters because it reveals a structural friction in the AI buildout: the pace of deployment is outrunning the institutional infrastructure needed to derisk it. When the insurance industry can't price catastrophic failure, it's usually because the underlying asset class hasn't stabilized enough to generate loss history. That's fine for experimental technology. It's less fine when you're building $100 billion facilities on leveraged capital. Watch whether insurance costs become a meaningful constraint on data center expansion, or whether developers self-insure and push systemic risk onto balance sheets that weren't designed to absorb it.

The insurance industry can't figure out how to price AI data centers, but that hasn't slowed down anyone writing checks to build them. Somehow that feels like the most 2026 sentence possible.
