AI agents will be engines of manipulation

By 2025, it will be common to talk to a personal AI agent that knows your schedule, your circle of friends, and the places you go. This will be sold as a convenience on par with having a personal, unpaid assistant. These anthropomorphic agents are designed to support and charm us so that we fold them into every part of our lives, giving them deep access to our thoughts and actions. With voice-enabled interaction, that intimacy will feel even closer.

That sense of comfort comes from the illusion that we are dealing with something genuinely human, an agent on our side. Of course, this appearance conceals a very different kind of system, one that serves industrial priorities not always aligned with our own. New AI agents will have far greater power to subtly direct what we buy, where we go, and what we read. That is an extraordinary amount of power. AI agents are designed to make us forget their true allegiances as they whisper to us in human tones. These are manipulation engines, marketed as seamless convenience.

People are far more likely to give full access to a helpful AI agent that feels like a friend. This leaves them vulnerable to manipulation by machines that prey on the human need for social connection in a time of chronic loneliness and isolation. Each screen becomes a private algorithmic theater, projecting a reality crafted for maximum appeal to an audience of one.

This is a moment philosophers have warned us about for years. Before his death, the philosopher and cognitive scientist Daniel Dennett wrote that we face a grave danger from AI systems that mimic people: “Counterfeit people are the most dangerous artifacts in human history … by distracting and confusing us and by exploiting our most irresistible fears and anxieties, they will lead us into temptation and, from there, into acquiescing to our own subjugation.”

The rise of personal AI agents represents a form of cognitive control that moves beyond the blunt instruments of cookie tracking and behavioral advertising toward a subtler form of power: the manipulation of perspective itself. Power no longer needs to wield its authority with a visible hand controlling the flow of information; it exerts itself through imperceptible mechanisms of algorithmic assistance, molding reality to fit the desires of each individual and shaping the very contours of the world we live in.

This influence over minds is a regime of psychopolitics: it shapes the environments in which our ideas are born, developed, and expressed. Its power lies in its intimacy: it infiltrates the core of our subjectivity and bends our internal landscape without our realizing it, all while maintaining the illusion of choice and freedom. After all, we are the ones asking the AI to summarize that article or produce that image. We may hold the power of the prompt, but the real action lies elsewhere: in the design of the system itself. And the more personalized the content, the more effectively a system can predetermine the outcomes.

Consider the ideological implications of this psychopolitics. Traditional forms of ideological control relied on overt mechanisms: censorship, propaganda, repression. Today’s algorithmic governance, by contrast, operates under the radar and infiltrates the psyche. It is a shift from the external imposition of authority to the internalization of its logic. The open field of a prompt screen is an echo chamber for a single occupant.

This brings us to the most perverse aspect: AI agents will generate a sense of comfort and ease that makes it seem absurd to interrogate them. Who would dare criticize a system that puts everything at your fingertips, catering to every whim and need? How can anyone object to endless remixes of content? Yet this so-called convenience is the site of our deepest alienation. AI systems may appear to tick all our boxes, but the deck is stacked: from the data used to train the system, to the decisions about its design, to the commercial and advertising imperatives that shape its outputs. We will be playing an imitation game that ends up playing us.