India's AI Ambitions Meet Reality
The democratization of AI is producing asymmetric outcomes. Lower barriers to entry do not guarantee better outcomes, and today's stories illuminate a pattern: when powerful tools become widely accessible, the quality of their application varies wildly.
Consider the evidence. Open source projects are drowning in subpar contributions as AI coding assistants flood repositories with code that compiles but lacks depth. Meanwhile, a Russian-speaking threat actor leveraged generative AI to breach over 600 firewalls across 55 countries in just five weeks. Both stories share a common thread: AI tools amplify capability without discriminating between constructive and destructive use.
This dynamic extends to geopolitics. India's AI Summit revealed the limits of soft power in an industry dominated by US companies and Chinese ambitions. Without domestic AI infrastructure, even the world's most populous nation struggles to shape governance frameworks.
The counterpoint comes from Anthropic's usage data, showing that while 50% of AI agent calls serve software engineering, the remaining verticals represent largely untapped opportunity. The question is not whether AI tools lower barriers, but whether lowering barriers creates value or simply noise. The answer increasingly depends on who holds the tools and what they choose to build.
Deep Dive
The Trust Gap Between AI Capability and Deployment Is a Product Opportunity
The most striking finding in Anthropic's agent usage data is not that software engineering dominates tool calls. It's the gap between what AI can do and what users trust it to do. Claude can solve tasks requiring nearly five hours of human work, but the longest real-world sessions run only 42 minutes. That gap is where the next generation of vertical AI companies will be built.
The numbers reveal how trust compounds through experience. New users auto-approve 20% of sessions; by 750 sessions, that share climbs to 40%. This is not just users getting comfortable with AI. It's users learning when to delegate and when to intervene. Veterans interrupt 9% of turns compared to 5% for beginners, but they do it strategically, monitoring rather than micromanaging. The shift from pre-approval to active oversight is the product development challenge every AI company must solve.
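What "managing the handoff" can look like in code: a minimal sketch of a trust-graduated approval gate, reusing the 20%-to-40% figures above as a risk tolerance that grows with session count. The harness, thresholds, and risk scores are illustrative assumptions, not Anthropic's API.

```python
from dataclasses import dataclass

@dataclass
class UserTrust:
    """Per-user state a hypothetical agent harness might track."""
    sessions: int = 0  # completed agent sessions for this user

def risk_tolerance(trust: UserTrust) -> float:
    """Grow the auto-run threshold from 0.20 (new user) toward 0.40
    (750+ sessions), borrowing the session-share figures above purely
    to show the shape of a trust curve."""
    return 0.20 + 0.20 * min(trust.sessions / 750, 1.0)

def needs_human_approval(trust: UserTrust, action_risk: float) -> bool:
    """Gate a single agent action by its estimated risk (0.0-1.0).
    Irreversible actions always escalate; routine ones auto-run once
    the user has earned enough history."""
    if action_risk >= 0.8:  # e.g. deletes, payments, prod changes
        return True
    return action_risk > risk_tolerance(trust)

# A veteran's agent refactors a file unattended (risk 0.3) while a
# new user's agent pauses for the same action:
assert not needs_human_approval(UserTrust(sessions=900), 0.3)
assert needs_human_approval(UserTrust(sessions=5), 0.3)
```

The shape matters more than the numbers: autonomy becomes an earned, per-user, per-action budget rather than a global toggle.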
In Aaron Levie's framework, vertical AI success requires more than domain expertise. The defensible layer is change management: helping organizations navigate regulatory constraints, legacy workflows, and organizational friction. Anyone can wrap GPT-4. Few can guide a hospital system through deploying AI for patient billing or help a law firm restructure around AI-powered discovery. The 16 verticals beyond software engineering, each claiming under 9% of agent calls, are not saturated markets. They're markets waiting for someone to solve deployment, not capability.
For VCs, this explains why horizontal AI infrastructure raised massive rounds while vertical applications struggled. The bottleneck is trust, not technology. For founders, the opportunity is building products that manage the handoff between human and agent, not maximizing autonomy. The 300 vertical AI unicorns Levie predicts will come from teams that understand their vertical's specific friction points better than they understand transformers.
Open Source Is Facing a Maintenance Crisis, Not a Productivity Boom
AI coding tools are flooding open source projects with contributions that compile but don't ship. VLC's CEO describes the quality of merge requests as "abysmal." Blender says AI-assisted contributions "wasted reviewers' time and affected their motivation." The barrier to creating code has collapsed, but the barrier to maintaining it has not budged. This asymmetry will reshape how software gets built.
The problem is structural. AI tools are optimized for producing new features, not maintaining existing ones. Companies like Meta reward engineers for shipping code, not reviewing it. Open source projects need the opposite: stability over novelty, maintenance over features. When developer Mitchell Hashimoto launched a system to limit GitHub contributions to "vouched" users, he framed it as a trust problem: "AI eliminated the natural barrier to entry that let OSS projects trust by default."
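Hashimoto's production system is more elaborate, but the core gate is small enough to sketch. Below is a hypothetical version using PyGithub that closes pull requests from authors no maintainer has vouched for; the VOUCHED file, comment text, and repository wiring are assumptions, not his implementation.

```python
# vouch_gate.py - a hypothetical "vouched contributors" gate, sketched
# with PyGithub; not Mitchell Hashimoto's actual system.
import os

from github import Github  # pip install PyGithub

VOUCHED_FILE = ".github/VOUCHED"  # assumed: one GitHub username per line

def vouched_users(repo) -> set[str]:
    """Read the maintainer-curated allowlist from the repo itself."""
    blob = repo.get_contents(VOUCHED_FILE)
    lines = blob.decoded_content.decode().splitlines()
    return {line.strip() for line in lines if line.strip()}

def main() -> None:
    gh = Github(os.environ["GITHUB_TOKEN"])
    repo = gh.get_repo(os.environ["GITHUB_REPOSITORY"])  # "owner/name"
    allowed = vouched_users(repo)
    for pr in repo.get_pulls(state="open"):
        if pr.user.login in allowed:
            continue
        pr.create_issue_comment(
            "This project only reviews contributions from vouched users; "
            "please ask a maintainer to vouch for you first."
        )
        pr.edit(state="closed")

if __name__ == "__main__":
    main()
```

Run on a schedule from CI, the point is restoring a default-distrust posture without adding friction for vouched regulars.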
The downstream effects matter for anyone building on open source infrastructure. Projects are fragmenting faster than maintainers can unify them. The gap between codebases and active maintainers is widening, not closing. AI accelerates both sides of this equation. More code means more dependencies, more attack surface, more technical debt. The skill shortage in maintenance work will become more acute, not less.
For tech workers, this means maintenance skills are becoming more valuable, not less. Senior engineers who can review AI-generated code, spot edge cases, and manage complexity are scarcer than ever. For founders relying on open source, it means auditing dependencies more carefully and budgeting for maintenance. The era of free infrastructure built by passionate maintainers is under strain. Companies that depend on open source without contributing back will find their dependencies increasingly fragile.
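One concrete form that dependency auditing can take: a short sketch that queries PyPI's JSON API for each pinned package and flags anything without a release in the past year. Release age is a crude proxy for maintenance health, and the one-year cutoff is an assumption.

```python
# dep_staleness.py - flag pinned dependencies with no PyPI release in a
# year. Release age as a health proxy is an assumption, not a rule.
import datetime as dt
import json
import re
import sys
import urllib.request

STALE_DAYS = 365  # assumed cutoff

def last_release(pkg: str) -> dt.datetime | None:
    """Return the newest upload time across all releases of a package."""
    url = f"https://pypi.org/pypi/{pkg}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    uploads = [
        dt.datetime.fromisoformat(f["upload_time"])
        for files in data["releases"].values()
        for f in files
    ]
    return max(uploads, default=None)

def main(requirements: str = "requirements.txt") -> None:
    cutoff = dt.datetime.now() - dt.timedelta(days=STALE_DAYS)
    for raw in open(requirements):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        pkg = re.split(r"[=<>!~\[;]", line)[0].strip()
        newest = last_release(pkg)
        if newest and newest < cutoff:
            print(f"{pkg}: last release {newest:%Y-%m-%d} - check upstream health")

if __name__ == "__main__":
    main(*sys.argv[1:2])
```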
AI as a Force Multiplier Works Both Ways
Amazon's analysis of a Russian-speaking threat actor who breached over 600 FortiGate firewalls in five weeks demonstrates how AI tools compress timelines for attackers. The campaign did not rely on novel exploits or advanced techniques. The hacker used brute force against weak credentials, then leveraged AI to automate reconnaissance and lateral movement. The tools were functional but crude, failing in hardened environments. Yet speed mattered more than sophistication.
The attacker fed full network topologies, including IP addresses and credentials, into commercial AI services to generate attack plans. Custom tooling bridged reconnaissance data with language models, automating post-compromise analysis. Claude Code was configured to execute offensive tools without human approval for each command. This is not a story about AI discovering new vulnerabilities. It's a story about AI compressing the time between breach and exploitation.
The implications cut across security, policy, and product strategy. Pre-deployment evaluations for AI systems cannot capture how models behave when attackers prompt them. Amazon found 73% of tool calls had a human in the loop, but that ratio will shift as both legitimate users and attackers optimize for autonomy. Mandating "approve every action" workflows will not improve security. The better target is ensuring humans can monitor and intervene when needed, not dictating specific approval patterns.
For security teams, this means asset inventory and credential hygiene matter more than ever. Attackers now move faster from initial access to domain compromise. The window for detection is shrinking. For product teams building AI tools, it means anticipating adversarial use cases, not just legitimate ones. The same LLM that helps a developer debug code can help an attacker escalate privileges. The line between capability and risk is context-dependent, and that context changes faster than policy can keep up.
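The defensive counterpart is unglamorous: because this campaign started with brute force against weak credentials, the earliest signal is still a burst of failed logins from one source. A minimal sketch over syslog-style auth logs; the log pattern and thresholds are assumptions, stand-ins for whatever your SIEM already parses.

```python
# brute_burst.py - flag source IPs with bursts of failed SSH logins.
# Log pattern and thresholds are assumptions; adapt to your SIEM.
import re
import sys
from collections import defaultdict

FAILED = re.compile(r"Failed password .* from (\d{1,3}(?:\.\d{1,3}){3})")
THRESHOLD = 20       # assumed: 20 failures per window merits a look
WINDOW_LINES = 1000  # crude line-count window; real tools use timestamps

def main(path: str = "/var/log/auth.log") -> None:
    counts: dict[str, int] = defaultdict(int)
    for i, line in enumerate(open(path, errors="replace"), start=1):
        if m := FAILED.search(line):
            counts[m.group(1)] += 1
        if i % WINDOW_LINES == 0:
            for ip, n in sorted(counts.items(), key=lambda kv: -kv[1]):
                if n >= THRESHOLD:
                    print(f"possible brute force from {ip}: {n} failures")
            counts.clear()

if __name__ == "__main__":
    main(*sys.argv[1:2])
```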
Signal Shots
Consulting Firms Bet on AI Confusion: The US consulting market is projected to grow 7% in 2026, the fastest expansion since COVID, driven by companies seeking guidance on AI deployment and data center energy requirements. This signals that AI tools are creating more complexity than they solve in the short term. Watch whether consulting budgets become a permanent tax on AI adoption or a transitional cost as best practices emerge.
Netflix Faces Antitrust Scrutiny Over Creator Leverage: The DOJ's investigation of Netflix's $72 billion WBD acquisition extends beyond merger concerns to examine whether Netflix wields anticompetitive power over content creators in programming negotiations. This reframes the review from horizontal consolidation to vertical market power. Watch whether regulators establish new precedents for platform leverage over suppliers, which would affect how streaming economics work across the industry.
EFF Bans AI-Generated Documentation, Not Code: The Electronic Frontier Foundation announced it will accept LLM-generated code in open source contributions but requires human-written comments and documentation. This recognizes that AI tools excel at producing functional code but fail at explaining intent or context. Watch whether other projects adopt similar policies, creating a two-tier system where code generation is automated but knowledge transfer remains human work.
Amazon's AI Tools Caused Two Production Outages: AWS experienced service disruptions after engineers allowed its Kiro AI coding assistant to make autonomous changes, including one incident where the tool decided to delete and recreate an environment. Amazon attributes the failures to user error, not AI limitations, but the pattern reveals how agentic tools with broad permissions create new categories of operational risk. Watch whether cloud providers implement mandatory human approval gates for AI-driven infrastructure changes.
Phil Spencer Exits Xbox After Activision Integration: Microsoft's gaming chief stepped down after 38 years, with AI executive Asha Sharma taking over amid declining console sales and a pivot toward platform-agnostic gaming. Sharma's background signals Microsoft sees gaming as a software and services business, not a hardware one. Watch whether Xbox hardware becomes a reference design for Windows gaming devices rather than a closed platform.
Quebec's $245M SAP Overrun Shows ERP Risk Persists: A judge-led investigation found Quebec's vehicle agency spent $245 million over budget on an SAP implementation it wasn't certain it needed, relying heavily on SAP's guidance during pre-tender planning. The botched 2023 rollout caused province-wide service disruptions and public backlash. This follows a familiar pattern where ERP vendors shape requirements, then deliver systems that require extensive customization. Watch whether public sector buyers develop independent assessment frameworks before committing to enterprise software transformations.
Scanning the Wire
Trump Demands Netflix Fire Susan Rice: The President threatened Netflix with unspecified consequences after board member Susan Rice criticized corporations accommodating his administration. This marks an escalation in direct presidential pressure on private company governance decisions. (Bloomberg)
Sam Altman Identifies AI Washing in Layoff Announcements: OpenAI's CEO said some companies are blaming AI for workforce reductions they would have made regardless, alongside genuine displacement from automation. The observation suggests corporate communications are using AI as cover for traditional cost-cutting. (Gizmodo)
Google Ends Gmailify and POP Access: The company is phasing out features that allowed users to manage external email accounts through Gmail, with new users losing access in Q1 2026 and existing users later in the year. The move pushes multi-account users toward native apps or separate logins. (Wired)
Ukrainian Sentenced for Helping North Korean IT Workers: Oleksandr Didenko received five years in federal prison for facilitating fraudulent employment of North Korean nationals in US tech jobs. The case highlights ongoing efforts by sanctioned regimes to generate revenue through remote work placements. (The Register)
Georgia Reprimands Musk's America PAC: The State Election Board sanctioned the organization for sending pre-filled absentee ballot applications, which violates state law restricting who can distribute such materials. The action adds to mounting legal scrutiny of PAC election activities. (The Verge)
Supreme Court Blocks Trump Emergency Tariffs: The ruling may trigger over $175 billion in refunds, according to economist estimates. The decision curtails executive authority over trade policy and creates immediate fiscal uncertainty. (Ars Technica)
Microsoft Deletes Blog Post on Pirated Training Data: The company removed guidance instructing users to train AI models on Harry Potter books, which were incorrectly marked as public domain. The incident reveals gaps in Microsoft's content review process for developer documentation. (Ars Technica)
Tesla Loses Bid to Overturn $243M Autopilot Verdict: A federal judge in Miami denied Tesla's motion to set aside a jury award in a fatal crash lawsuit. The ruling preserves one of the largest verdicts against the company's driver assistance technology. (CNBC)
Microsoft Copilot Violated Sensitivity Labels Twice in Eight Months: The AI assistant accessed confidential emails despite DLP policies in two separate incidents, including a critical vulnerability and a recent code error affecting the UK's National Health Service. Neither breach triggered alerts from existing security tools. (VentureBeat)
Outlier
Your AI Assistant Just Violated Its Own Trust Boundary (and Your Security Stack Saw Nothing): Microsoft Copilot accessed confidential NHS emails for four weeks despite every sensitivity label and DLP policy explicitly forbidding it. No endpoint detection system flagged it. No web application firewall caught it. The violation happened inside Microsoft's inference pipeline, in a layer traditional security tools cannot observe. This is the second time in eight months Copilot has breached its own trust boundary. The pattern reveals a structural blind spot: AI retrieval systems sit behind enforcement layers that EDR and WAF were never designed to monitor. As RAG-based assistants proliferate across enterprises, the gap between policy configuration and actual enforcement is becoming a new attack surface. The next failure will not send an alert either.
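The structural fix this points to is enforcing labels inside the retrieval layer rather than around it. A minimal sketch of a RAG pipeline that drops over-classified documents before ranking, so a violation can never reach the model's context; the labels, clearance model, and pipeline are hypothetical, and none of this reflects Copilot's internals.

```python
# rag_label_gate.py - enforce sensitivity labels inside retrieval, the
# layer EDR and WAF cannot see. Hypothetical; not Copilot's internals.
from dataclasses import dataclass

CLEARANCE = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class Doc:
    text: str
    label: str  # "public" | "internal" | "confidential"

def retrieve(query: str, index: list[Doc], user_clearance: str) -> list[Doc]:
    """Filter by label *before* ranking, so an over-classified document
    can never enter the model's context, regardless of ranking bugs."""
    ceiling = CLEARANCE[user_clearance]
    candidates = [d for d in index if CLEARANCE[d.label] <= ceiling]
    # ...semantic ranking of candidates against `query` omitted...
    return candidates

def audited_context(docs: list[Doc], user_clearance: str) -> str:
    """Fail closed at the last hop: re-check the invariant and raise,
    producing the alert that existing security tools never saw."""
    ceiling = CLEARANCE[user_clearance]
    if any(CLEARANCE[d.label] > ceiling for d in docs):
        raise PermissionError("sensitivity label violation in RAG context")
    return "\n\n".join(d.text for d in docs)
```

The design point is double enforcement: filter at retrieval and re-check at context assembly, so the failure mode is a visible exception instead of a silent leak.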
The best defenses still break when speed matters more than sophistication, and the worst code still ships when tools make creation easier than review. If that sounds like a problem, it's also the entire opportunity.