
When the Machine Thinks in Our Place

A Note on the Seizure of Intelligence

Reflective article, May 2026

Two gestures, a few weeks apart, tell us more about our era than a thousand speeches. The first: Google Chrome writes to disk, without asking the user, a 4 GB file named weights.bin containing the weights of Gemini Nano. No alert, no request for consent. If the user deletes it, Chrome immediately re-downloads it. The second, more discreet but perhaps more revealing: Anthropic's Claude Desktop silently installs a Native Messaging bridge in seven Chromium browsers, including browsers that Anthropic's official documentation says are unsupported, and browsers that the user has not even installed.
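Neither gesture is hidden in any deep technical sense; both leave ordinary traces on disk that anyone can look for. As a purely illustrative sketch, the following Python script shows what such an audit might look like. The directory paths and the one-gigabyte threshold are assumptions drawn from public reports; they vary by platform and browser version and are not an exhaustive list.

```python
# audit_traces.py -- a minimal, illustrative sketch, not a security tool.
# Paths below are assumed Linux and macOS locations and vary by platform.
from pathlib import Path

HOME = Path.home()

# Candidate Chromium-family user data directories (illustrative, not exhaustive).
BROWSER_DIRS = [
    HOME / ".config/google-chrome",                      # Chrome, Linux
    HOME / ".config/BraveSoftware/Brave-Browser",        # Brave, Linux
    HOME / ".config/vivaldi",                            # Vivaldi, Linux
    HOME / "Library/Application Support/Google/Chrome",  # Chrome, macOS
]

def find_large_model_files(root: Path, threshold_gb: float = 1.0) -> None:
    """Flag .bin files above a size threshold, e.g. the reported weights.bin."""
    for path in root.rglob("*.bin"):
        size_gb = path.stat().st_size / 1e9
        if size_gb >= threshold_gb:
            print(f"[model? ] {path} ({size_gb:.1f} GB)")

def list_native_messaging_hosts(root: Path) -> None:
    """List Native Messaging host manifests registered under this browser."""
    for manifest in (root / "NativeMessagingHosts").glob("*.json"):
        print(f"[nm host] {manifest}")

for browser_dir in BROWSER_DIRS:
    if browser_dir.exists():
        find_large_model_files(browser_dir)
        list_native_messaging_hosts(browser_dir)
```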

These two facts must be weighed together. The first actor is the giant that controls the ecosystem; the second presents itself as the cautious, ethical voice, ostensibly concerned with safety in the field of artificial intelligence. And yet they converge in the same gesture: deciding, without consultation, that the personal machine is a resource at their disposal. As the researcher who documented both cases writes, an engineering team decided that the user's machine is a deployment surface to be optimized for the vendor's roadmap, not a personal device whose owner holds legal authority over what runs on it. That this same formula describes both Google's conduct and Anthropic's is not a minor detail. It is the diagnosis.

The Pattern Is Not the Actor, It Is the Era

One might be tempted to look for a single culprit. That temptation is a poor guide. When the installation occurs across Brave, Arc, Chromium, Vivaldi, and Opera, in addition to Chrome and Edge, and when the application recreates the deleted files at its next launch, we leave the realm of publisher carelessness. We enter the realm of normalization. Anthropic has publicly stated that Claude, its own model, now writes the majority of the company's code. The detail matters. It means that the decision to write silently into another publisher's territory, without consent, may never have crossed a human gaze capable of judging it problematic. The machine optimizes for function, and the function is the expansion of the usage surface.

Let us state it plainly: a dynamic this homogeneous, manifesting itself in both the advertising titan and the ethical challenger, producing the same practical effect on the user's machine, and obeying the same logic of deployment preceding consent, is not a matter of corporate choice. It is a matter of systemic arrangement. The structure acts; the individual actors execute. When the same configuration produces the same behavior across all operators, the answer must be sought elsewhere than in the morality of any one party: in the very nature of the relationship that the AI industry maintains with those it calls, euphemistically, its users.

Intelligence Become a Resource

Let us return to the beginning. Capitalism, in its successive phases, has always circumscribed a resource from which to extract value: land, then labor power, then attention, then data. Today, the resource is intelligence itself.

The figures leave no room for ambiguity. Intangible assets currently represent 95% of the value of the five largest publicly listed corporations, the so-called GAFAM. The specificity of contemporary capitalism rests on the financial valorization of a new class of intangible assets: digital data. And this valorization is preparing a qualitative leap. AI could become a "general condition of capitalist production," as rail and maritime transport once were, and as electricity is today. Let us be clear: not one tool among others, but the very ground on which all activity, economic or otherwise, will have to inscribe itself. To refuse will be to refuse the road, the rail, the electricity. To refuse will be to exclude oneself.

What is happening, then, is genuinely an operation of capture, and this operation follows a familiar grammar. Human cognition, distilled at scale from billions of interactions, becomes the ore from which statistical models are extracted — models that are in turn sold as cognitive prosthetics. The loop is perfect: take, refine, resell, retain. Users pay in data for what will be resold to them as services, and the services produce the data the next version will need in order to exist.

What makes the situation unprecedented is that this particular resource is not an object in the world. It is that through which we relate to the world. To capture intelligence is not merely to appropriate a raw material; it is to short-circuit the fundamental gesture by which a human subject experiences thinking, judging, and deciding. And it is precisely this short-circuit that constitutes the philosophical novelty of the moment.

What Becomes of the Subject When the Machine Thinks Before It Does

When Remy is presented as a 24/7 personal agent designed to transform Gemini into an assistant capable of acting on the user's behalf, and Google employees are already testing it, what looms exceeds the category of functionality. The agent can access conversations, connected applications, personal context, and location, and can integrate with Gmail, Calendar, Docs, Drive, Keep, Tasks, GitHub, WhatsApp, Spotify, and Google Photos. This does not sketch an assistant. It sketches the technical possibility of a total representation of an individual's life — more complete and more coherent than anything the individual can form for themselves.

The human being, ever since constituting itself as a philosophical subject, has defined itself by the capacity to ask what it ought to do and what is right. This questioning presupposes suspended time, deliberation, a withdrawal from urgency. The sciences alone cannot answer the fundamental questions: "What must we do?" and "What is just?" Without philosophy, the human sciences become empty tools. What becomes of this questioning when an agent responds before you have thought to ask, when the answer precedes the question, when the practical organization of the day is entrusted to an entity that learns from you better than you learn from yourself?

When the knowledge held of me becomes superior to the knowledge I hold of myself, the center of gravity shifts. My actions cease to be the expression of an inner deliberation; they become the arguments of a function whose parameters are foreign to me. The existential condition, insofar as it presupposes a minimal opacity of the subject to itself and an unfinished work of self-elucidation, is short-circuited. The subject does not disappear. It is dispossessed of the process by which it was constituted.

Several contemporary voices point to this vertigo. Whether their stated aims are celebrated as emancipatory or regulatory, they rest above all on a reductive and devitalizing model of the individual. The processes of alienation and disindividuation already at work with the advent of the digital accelerate. The right word is disindividuation. What is at stake is not a classical subjugation with its visible chains, but a slow dissolution of personal contours, a growing porosity between what I want and what the machine predicts I will want. The promise of augmentation, when it takes the form of delegation to an external agent, turns against itself: the cyborg is not an augmented human but a diminished living being. One does not augment oneself by entrusting to another the care of thinking for oneself. One learns to do without that thinking, and this learning is irreversible — as irreversible as the forgetting of languages called "dead" that one no longer practices but which nonetheless underpin the real and deep meaning of words and language as a whole.

Capture by Default

What makes the situation philosophically singular is that it proceeds from no choice. No referendum, no public deliberation, no consultation, no informed consent was sought. The default configuration has become the major political site of our era. What is decided there, offstage, structures behavior at planetary scale.

Consider the strategic sequence being written. Google has confirmed that it will merge ChromeOS and Android, with the mobile OS emerging triumphant. Sameer Samat, president of the Android ecosystem, made it official at Qualcomm's Snapdragon Summit: Android will be the winner, and users will see the results in 2026.

Read between the lines of this decision: it lets Google deploy its Gemini AI services across more devices. This is not a trivial technical choice. It is the unification of the software fabric around a proprietary AI core, designed to install itself across every surface of daily life, from pocket to desk.

ChromeOS was built on Chromium, with web applications as the primary paradigm. Aluminium OS, by contrast, is built on Android with full desktop capabilities from day one; it runs all Play Store applications natively, and Gemini is integrated into the core of the operating system, processed locally via the NPU. At that point, the boundary between tool and subjectivity becomes an administrative fiction. The weights.bin file is merely an advance guard: it prepares a terrain where the autonomous agent will no longer be an option but the normal mode of use. This is the clearest sign that Google wants Gemini to become the operating system of daily life. Chatbots that answer questions are no longer sufficient; the next step is an AI that actually does things for you without requiring constant instructions. The expression "operating system of daily life" deserves to be paused over. It states the horizon plainly: algorithmic mediation should cease to be an option and become the ambient air.

The Structural Lag of Regulators

At this point, one would expect a political response. On paper, it exists. Companies around the world are watching the legal deadline of August 2, 2026, the date that triggers the main obligations of the EU's AI Act and fundamentally redefines European artificial intelligence markets. In reality, the picture blurs. The European Commission published its Digital Omnibus on AI on November 19, 2025, proposing to push the high-risk compliance deadline from August 2, 2026 to December 2, 2027. The second political trilogue, on April 28, 2026, ended without agreement. What is at stake exceeds administrative chronology. The proposal is set against a delay in preparing the standards meant to support the high-risk requirements and in establishing competent authorities in the member states. This jeopardizes a smooth entry into application on August 2, 2026, while paralyzing secondary AI market players, whose strategies and investments may be rendered worthless in an instant.

A regulation that arrives too late does not regulate: it ratifies. It endorses what the industry has installed during the length of the proceedings. The structural gap between the speed of industrial deployment and the slowness of the regulatory apparatus is not an accident: it is the very condition of the model.

This is a deep sociological trait. The institutions that claim to frame technology are themselves structured by positions, trajectories, and dispositions that do not give them access to the real object. Norms, frameworks, and plans are produced there, but the practical relationship with code, infrastructure, and laboratory culture is absent. Industrial actors possess technical, scientific, and financial capital, along with relational capital that enables them, through a thousand channels, to orient the direction of decisions. The rule is constructed in a game whose rules are themselves defined by one of the players.

Hence the broader observation that can be made: the current conjuncture fuels pessimism in the face of a future obscured by the joint degradation of conditions of existence and the fascist radicalization of digital capitalism. More than ever, the question of labor and the deleterious impact of technologies on its organization crystallizes debates. Regulation — necessary, belated, incomplete — will not suspend the dynamic of seizure, because this dynamic operates elsewhere: in default configurations, in the files that publishers authorize themselves to write in others' territory, in users' ignorance of the actual capabilities and actions of their everyday tools, in the silent bridges that pre-install tomorrow's capabilities today.

The Same Grammar for All

Let us return to Anthropic, because this case concentrates the entire problematic at hand. Alexander Hanff, the information systems security consultant who discovered the Anthropic spyware, argues that the behavior constitutes a violation of Article 5(3) of the EU's ePrivacy Directive, which requires explicit consent before storing or accessing information on a user's device, except where strictly necessary for the service requested. He sent Anthropic a formal notice demanding opt-in changes within 72 hours.

The honest description of what sits on the machine is: pre-installed spyware capability, silently deposited, dormant, awaiting activation. The moment an associated extension arrives — whether the user installs it, a corporate policy pushes it, an attacker plants it, or Anthropic's next update bundles it — the word dormant disappears. What this case documents is that there is, within the contemporary AI industry, no decisive moral difference between actors. The advertising titan and the ethical challenger resemble one another precisely where one expected them to differ: in the concrete manner of writing into our systems. One claims experience optimization; the other claims safety. Both practice the same radical asymmetry between their operational authority and the ignorance in which they keep those whose machines run their code.
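For readers who prefer to see rather than take on faith what such a bridge consists of, here is a minimal sketch that reads each Native Messaging manifest and prints which external binary it authorizes the browser to launch, and on behalf of which extensions. The fields read here (name, path, allowed_origins) follow the documented Chromium Native Messaging manifest format; the directories scanned are illustrative Linux paths and will differ on other platforms.

```python
# inspect_nm_hosts.py -- show what each Native Messaging manifest authorizes.
# Directories below are assumed Linux locations; adapt for your platform.
import json
from pathlib import Path

SEARCH_DIRS = [
    Path.home() / ".config/google-chrome/NativeMessagingHosts",
    Path.home() / ".config/BraveSoftware/Brave-Browser/NativeMessagingHosts",
    Path.home() / ".config/vivaldi/NativeMessagingHosts",
]

for directory in SEARCH_DIRS:
    if not directory.exists():
        continue
    for manifest_path in sorted(directory.glob("*.json")):
        try:
            manifest = json.loads(manifest_path.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        print(manifest_path)
        print(f"  name:    {manifest.get('name')}")
        # The binary the browser may spawn on an extension's behalf.
        print(f"  binary:  {manifest.get('path')}")
        # The extension origins allowed to open this channel.
        print(f"  origins: {manifest.get('allowed_origins')}")
```

A manifest whose path points to a vendor's application, waiting for an extension matching its allowed origins, is exactly the dormant capability described above: inert until the missing half arrives.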

This is where one must resist the individualizing moral temptation to blame any particular engineering team. The problem is not that Anthropic or Google are staffed by bad engineers. The problem is that the structural position of these actors, within an economy where capture is the mode of production, mechanically produces this behavior. As long as reasoning proceeds company by company, one is treating the surface foam. The current runs deeper.

The Narrow Window

It would be easy, at this point, to slide into despairing lucidity. That would be a mistake. The present situation also contains — and this may be the most surprising aspect — an unprecedented historical opportunity.

For the first time, a significant portion of humanity has access, at near-zero marginal cost, to cognitive capabilities that were yesterday the preserve of small, properly trained elites. A teenager in an isolated village can conduct a university-level technical dialogue with a model. A farmer can have a crop disease diagnosed by photograph. A primary school teacher can generate adapted pedagogical material in minutes. The promise, in its pure potential, is enormous. The problem is that this capability is captured by a handful of private operators who, by monetizing access and organizing dependency, transform a promise of emancipation into a mechanism of alienation. But the promise itself remains open.

Seizing this opportunity requires defending three demands simultaneously:

The first: to defend relentlessly the ecosystem of open source, open models, publicly available weights, interoperable protocols, and decentralized architectures. As long as there exist in the world models that can be audited, modified, and run locally without accountability to any party, a margin of maneuver remains. The fight for open source is not a fight for nostalgic technicians. It is the major political fight of this decade.

The second: to reinvest in education from a perspective that is not one of adaptation to technology, but one of forming subjects capable of critical use. An education that teaches how these machines work, how to recognize their biases, how to identify their hallucinations, how to use them as amplifiers of a thought one conducts oneself rather than as substitutes for that thought. It is demanding, it is slow, it is less spectacular than a new European regulation, but it is the only durable guarantee.

The third: to think politically, and no longer merely technically, about infrastructures. Data centers, cables, semiconductor factories, water, energy: all of this composes a material geography from which politics is today largely absent. At the scale of Chrome, the climate bill of a single model push, paid in atmospheric CO2 by the entire planet, lies between six thousand and sixty thousand tonnes of CO2-equivalent emissions; the sketch below shows how that range arises. This is the environmental cost of a company unilaterally deciding that the default browser of two billion people will massively distribute a 4 GB binary they did not request. These figures are not communications spin. They describe a cost transfer, from the company pushing the model to the community breathing its carbon.
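The range is easy to reconstruct. In the back-of-envelope sketch below, the user count and payload size come from the text; the network energy intensity and grid carbon intensity are assumed round numbers, chosen only to show how the order of magnitude arises, not measured values.

```python
# carbon_envelope.py -- reconstructing the six-to-sixty-thousand-tonne range.
# Assumed values are marked as such; only the user count and payload size
# come from the text above.
users = 2_000_000_000             # browsers receiving the push (from the text)
payload_gb = 4                    # size of the pushed binary, in GB (from the text)
kwh_per_gb_range = (0.002, 0.02)  # assumed network energy intensity, kWh/GB
kg_co2e_per_kwh = 0.4             # assumed average grid carbon intensity

total_gb = users * payload_gb     # 8 billion GB transferred in total
for kwh_per_gb in kwh_per_gb_range:
    tonnes = total_gb * kwh_per_gb * kg_co2e_per_kwh / 1000  # kg to tonnes
    print(f"{kwh_per_gb} kWh/GB -> {tonnes:,.0f} tonnes CO2e")
# Prints roughly 6,400 and 64,000 tonnes: the order of magnitude quoted above.
```

Shift either assumed intensity by a factor of ten and the bill shifts with it; the point is not the exact figure but that a push at this scale carries a planetary cost decided in a single office.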

Taking Back Control

There are, in the history of technology, moments when a window opens briefly, and the way society seizes it fixes the balance of power for decades. We are in one of those moments. The form AI will have taken in ten years is not written. It depends on choices being made now, partly before our eyes, partly without our knowledge.

The issue, therefore, is less to predict than to decide. What is at stake with Remy, with the silent download of the Gemini Nano model, with the silent bridge of Claude Desktop, with the merger of Android and ChromeOS, is not the end of a story but the beginning of a narrative whose outcome remains undecided. Refusing techno-determinist fatalism does not mean denying the power of the devices being installed. It means recognizing that these devices are not forces of nature. They are products of human decisions, made in identifiable offices, by nameable individuals, under precise economic constraints. What has been done can, to varying degrees, be undone, slowed, amended, contested, and competed with.

For that, however, one must first become capable again of something simple and difficult: thinking for ourselves, at a time when a machine claims to think in our place. This is an old demand, already formulated at the dawn of the Enlightenment, that today takes on new weight. Dare to know, dare to understand, dare to refuse, dare to configure differently, dare to switch off, dare to slow down. These minuscule gestures will not overturn the order of the world. But they keep open the possibility that the order of the world is not written entirely by others.

A 4 GB file silently deposited by Google. An extension bridge pre-installed without consent by Anthropic. A personal agent learning to anticipate our desires. An operating system closing in on itself. Such is the concrete backdrop of our era. It remains to decide who, in this backdrop, will be the subject, and who will be the object. That decision is not technical. It is, in the most demanding sense of the word, philosophical. And it concerns us all.
