The Sovereignty Trap of Agentic AI
Ulysses and the Sirens, John William Waterhouse (1891).
This sixth edition of the BomaliQ Risk Signal highlights a key risk: AI vendors and C-suite leaders share a blind spot that endangers your company's sovereignty.
What do AI vendors offer?
AI vendors act as middlemen on the algorithmic edge, using hyperscalers' resources to build agentic platforms for corporations that want to stay competitive.
Agentic platforms apply the principle of McDonaldization at scale to clear bottlenecks in corporate workflows. They force existing data flows and the tacit knowledge held in emails, CRM systems, and folder structures into efficient, calculable, predictable, and controllable workflows.
AI vendors are agnostic about LLM providers. They rent storage and compute from hyperscalers like Google Cloud Platform, whose cost-benefit ratio favours data security.
The vendor’s main sales pitch is fatalism: lock-in is inevitable. Nobody builds from scratch unless it’s custom software for sensitive sites.
Why corporate leaders buy in: The 8% illusion
Chiefs of Staff (CoSs) worry about being replaced by an "AI CoS," but it turns out that 92% of core CoS functions (e.g., stakeholder management, problem-solving) are human-based and therefore at low risk of automation.
In contrast, only 8% of CoS tasks (e.g., progress tracking, reporting, and documentation) are at imminent risk of automation by agentic platforms. This conclusion validates the AI vendors' claim: "AI tools are powerful. Use them for the 8%."
AI vendors and corporate leaders align seamlessly: vendors close deals, and leaders automate the low-value tasks while keeping humans on the valuable work.
The dual blind spot
Both vendors and corporate leaders fundamentally misunderstand the strategic value of data in the Frontier Firm model.
The vendor blind spot: Security vs. sovereignty
Vendors answer concerns about infrastructural autonomy, which is concealed in data, with assurances of data security. The risk isn't that Google will leak the data; it's the architectural dependency the platform creates, which AI vendors bluntly admit: "the magic machine (sic) sucks all of the goodness out of data—even the human heads." Agentic platforms extract a company's proprietary institutional memory and lock it in the cloud. Building the system is only the first step; who owns the infrastructure beneath it determines the firm's long-term survival.
They also conflate traditional IT, which hosts static data (technical lock-in), with agentic platforms, which host dynamic reasoning over data (cognitive lock-in). A company that locks its mental model into an agentic platform is exposed to cognitive lock-in: if it loses access to the model, it loses the ability to execute its own business logic and becomes a mere API wrapper, a distribution node for Google or Azure.
The corporate blind spot: The Trojan horse
Corporate leaders overlook the core mission of AI vendors' agentic platforms because they assume the 8% of automatable tasks and the 92% of human value-add aren't on the same continuum: the former is information processing, the latter human judgment. But here is the trick. Although agentic AI cannot automate the 92% of a corporate job right now, the systemic risk is that the 8% acts as a Trojan horse for the platformization of the C-suite (see BomaliQ Risk Signal | 1). Once a corporation invests in an agentic platform to handle that 8%, sunk costs dictate that management will mandate its use and push the organization to expand the platform's mandate (wink-wink, Salesforce!).
Corporate leaders assume that their existing data and habits will organically shape the agentic platform (a bottom-up process). The reality is the exact opposite: the platform imposes its rigid architecture from the top down. Human agents are forced to curate, annotate, and contort their work to fit the platform's cognitive engine. Because the AI is deployed to handle the 8% of tasks, humans are forced to digitize their 92% just to push it through the platform's portal. Agentic platforms are not fully automated workflows; they are standardized extraction engines that require human oversight. The platform won't replace the corporate leaders, but it will mediate them, forcing them to feed their tacit knowledge into the system just to keep the machine running.
The rise of the high-end click-worker
When AI vendors say they are "LLM-agnostic," it means they are renting a foundation model from providers like Anthropic or OpenAI. When their agentic platform is deployed within a corporation, it arrives with baseline reasoning but must be continuously trained to understand the firm's specific environment.
So, who trains the model? Not outsourced click-workers in the Global South. Not the AI vendor's engineering team. Your own employees will have to do it.
Through daily reinforcement learning and workflow corrections, the artificial barrier between the "8% of tasks deemed automatable" and the "92% of human expertise" collapses entirely. Your highly paid experts are effectively turned into high-end click-workers, forced to continuously translate their tacit knowledge into machine-readable data to keep the platform functioning.
This forced translation isn't a glitch; it is the final stage of McDonaldization in corporate life. For 30 years, service organizations have prioritized efficiency, calculability, predictability, and control over human judgment.
Agentic AI is the ultimate tool for this shift: the drive for scalable margins steadily erodes human intuition.
The existential risk: Intelligence as a utility
Sam Altman recently claimed that AI's future is intelligence as a utility, metered like water or power. Intelligence would become a centralized commodity, erasing competitive advantage: if all firms use the same Google Vertex model, competition centers on price alone, driving margins to zero.
Firms will be left to choose on branding alone, like sneaker brands A and B that are manufactured in the same sweatshop to the same standardized quality and design.
This is the existential wall that every CSO and CEO will hit over the next 24 months. AI vendors are right about the short-term financial reality, and Sam Altman’s "AI as a utility" is the exact narrative Big Tech wants to cement.
If you accept their view, fatalism follows: every company becomes a vassal. BomaliQ offers a third way—you don’t have to choose between surrender and futile resistance.
The BomaliQ strategy: Cognitive compartmentalization
BomaliQ’s strategy rejects total resistance or vassaldom in favour of cognitive compartmentalization. Companies must split operations into two clear zones:
The commodity zone: Accept the utility
What it is: Generic workflows, basic code generation, drafting standard emails, parsing public data, and summarizing generic meetings.
The play: Use the $1T cloud landlords (OpenAI, Azure, Vertex). Rent the intelligence. The speed and cost advantages here are too great to ignore. Surrender this territory to the utility model because it does not contain your competitive advantage.
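For illustration, "renting the intelligence" is little more than a metered API call. Here is a minimal sketch, assuming the official OpenAI Python client with an API key in the environment; the model name and prompt are placeholders, not a recommendation:

```python
# Commodity zone: rent generic intelligence from a cloud provider.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A generic, non-sensitive task: exactly what belongs in the commodity zone.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any rented general-purpose model
    messages=[{"role": "user",
               "content": "Draft a polite follow-up email to a supplier."}],
)
print(response.choices[0].message.content)
```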
The sovereign citadel: Protect the core
What it is: Proprietary business logic, unquantifiable human judgment, highly sensitive TPRM data, unique strategic playbooks, and the 92% of tacit knowledge that defines the firm's actual value.
The play: Never feed this into a public or third-party cloud model. This is where companies must deploy open-source Small Language Models like Llama 3 or Mistral, running locally on secure, private edge servers.
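By contrast, sovereignty in the citadel means the model weights, the prompts, and the outputs never leave your hardware. A minimal sketch, assuming the open-source llama-cpp-python runtime and a locally stored GGUF weights file; the path and parameters are hypothetical:

```python
# Sovereign citadel: run an open-weight small model entirely on local hardware.
# Assumes `pip install llama-cpp-python` and GGUF weights downloaded in advance.
from llama_cpp import Llama

llm = Llama(
    model_path="/secure/models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,      # context window; tune to the hardware available
    verbose=False,
)

# Sensitive material is processed in place; nothing crosses the network.
result = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarize the key risks in this playbook section: ..."}],
)
print(result["choices"][0]["message"]["content"])
```

The design choice matters more than the library: any runtime that keeps inference on machines you control preserves the citadel.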
If you outsource your core institutional memory to a utility, you lose your competitive advantage and become a Vassal Firm.
BomaliQ helps executives map the algorithmic frontline: we identify which workflows you must surrender to the cloud to stay fast, and which cognitive assets you must lock inside a sovereign citadel to stay alive.
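In practice, that mapping hardens into a policy gate sitting in front of every AI call. A hedged sketch of the idea, with illustrative tags and an intentionally crude routing rule; a real deployment needs an organization-specific assessment:

```python
# Cognitive compartmentalization as code: route each task to a zone.
# The tag set and routing rule below are illustrative placeholders only.
SOVEREIGN_TAGS = {"strategy", "tprm", "pricing", "playbook"}

def route_task(task_text: str, tags: set[str]) -> str:
    """Return the zone allowed to process this task."""
    if tags & SOVEREIGN_TAGS or "confidential" in task_text.lower():
        return "citadel"    # local open-weight model only
    return "commodity"      # rented cloud intelligence is acceptable

# The gate defaults tasks to the cloud unless they touch sovereign assets.
assert route_task("Draft a follow-up email", {"correspondence"}) == "commodity"
assert route_task("Update the M&A playbook", {"playbook"}) == "citadel"
```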
About the Author & BomaliQ
This newsletter is authored by Mathieu Lajante, PhD, Founder and Architect of BomaliQ Inc. BomaliQ provides specialized strategic intelligence for the algorithmic frontline, helping corporate leaders navigate the behavioural and political frictions of high-tech organizational transformation.
Nature of Intelligence
The insights provided in this publication are based on the stress-testing of publicly available industry reports, market data, and proprietary analytical frameworks. This content is intended for informational and strategic signalling purposes only. While every effort is made to ensure the accuracy of the analysis, the algorithmic frontline is a volatile environment.
Limitation of Liability
The BomaliQ Risk Signal does not constitute professional consulting advice, legal counsel, or a formal business diagnosis. Readers should not make critical strategic decisions based solely on this newsletter without a rigorous, organization-specific assessment. BomaliQ Inc. and Mathieu Lajante shall not be held liable for any business outcomes or losses resulting from the use of this general intelligence.