From Likes to Language Models
The Culture Industry Revisited in the Age of Algorithmic Civilization
We’re confronted daily with smartphones that track our every move, social media platforms that monetize every click, and, increasingly, powerful artificial intelligence systems that can simulate, sometimes uncannily, human thought and creativity. The pace of these transformations can feel exhilarating yet also disorienting. It raises questions about how our cultural values, collective aspirations, and individual identities are shaped by forces we neither fully understand nor control.
It’s tempting to dismiss these concerns by celebrating the conveniences technology provides. We’ve never been more connected, but as Theodor Adorno warned in his critique of the “culture industry,” this kind of immersion can become a subtle form of domination. The very structures meant to empower us can end up constraining how we experience ourselves and each other. If the first wave of digital life (email, instant messaging, and the web) was about convenience, the latest waves of social media and generative AI have saturated our present reality, encroaching upon our capacity to reflect and, in some cases, to think critically at all.
Social Media and the “Culture Industry”
Social media represents, in many ways, the culmination of Adorno’s vision of a “culture industry” that seamlessly weaves commodification into every aspect of human life. Ostensibly, these platforms offer connection, community, and self-expression. Yet beneath the interface of “likes”, “shares”, and “tweets” (or whatever X calls them now) lies a complex algorithmic apparatus designed to convert our interactions into profitable data. Our tastes, our friendships, even our emotions become raw material to be quantified and monetized. The “like” button, for instance, may feel innocuous, just a small gesture of approval, but each click folds us further into an algorithmic process that tracks, analyzes, and packages our behaviors for corporate gain.
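To make the mechanism concrete, here is a deliberately toy sketch, in Python, of how an engagement-driven feed might convert each click into a data point and rank content accordingly. The weights, names, and scoring below are illustrative assumptions for the sake of the argument, not any real platform’s system:

```python
from collections import defaultdict

# Illustrative weights: how much each interaction "counts" toward a profile.
# (Assumed values for this sketch; real systems use learned models.)
ENGAGEMENT_WEIGHTS = {"view": 0.1, "like": 1.0, "share": 3.0}

interest_profile = defaultdict(float)  # topic -> inferred interest score

def record_interaction(topic: str, action: str) -> None:
    """Every click, however small, updates a monetizable profile."""
    interest_profile[topic] += ENGAGEMENT_WEIGHTS[action]

def rank_feed(posts: list[tuple[str, str]]) -> list[str]:
    """Order (title, topic) pairs by inferred engagement, not stated preference."""
    return [title for title, topic in
            sorted(posts, key=lambda p: interest_profile[p[1]], reverse=True)]

# One "like" and one "share" on outrage content...
record_interaction("outrage", "like")
record_interaction("outrage", "share")
record_interaction("philosophy", "view")

# ...and the feed already reorders itself around what provoked us.
feed = rank_feed([("calm essay", "philosophy"), ("hot take", "outrage")])
print(feed)  # ['hot take', 'calm essay']
```

The point of the sketch is the feedback loop: the more we engage, the more the system serves up whatever made us engage, which is precisely the commodification of attention Adorno anticipated.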
This mechanism reflects a phenomenon Adorno described as pseudo-individualization (the illusion of choice). We believe we’re making personal, original choices by curating our profiles and selecting which content to consume or produce. In reality, these choices emerge from a restricted menu, predicated on patterns that have been optimized to maximize engagement. The notion of an “authentic self” becomes blurred. We embrace preset templates for humor, aesthetics, and political opinion, formats that lend the illusion of uniqueness but are, in fact, mass-manufactured forms of communication. Under the culture industry’s logic, everyone is reduced to being both a commodity and a consumer.
Adding to this is the strangely elastic sense of time that social media imposes. Our feeds refresh constantly, creating what might be called the “permanent now”: an unrelenting stream of stimuli that discourages true pause or introspection. The writer Thomas Pynchon memorably captured this feeling with the concept of “temporal bandwidth” in his novel Gravity’s Rainbow: the narrower our sense of past and future, the thinner our present becomes. Social media’s endless feed narrows our temporal bandwidth to a razor-thin sliver of immediate updates, making sustained reflection or complex thought increasingly difficult. As we scroll, we remain caught in a hypnotic loop of quick hits: outrage, amusement, sadness, jealousy, each ephemeral and promptly supplanted by something new.
Generative AI and the “Administered World”
While social media commodifies our relationships and subjective experiences, artificial intelligence intensifies what Adorno called the “administered world”, a realm in which human actions, thoughts, and emotions are all subject to systematic oversight and regulation. In its current phase, AI often operates as a force that “personalizes” our environment, recommending movies, curating our news feeds, suggesting the next item in our online shopping cart. But as these systems become more sophisticated, they move from guiding our choices to shaping them, potentially constraining the horizons of our imaginations.
Consider the advent of large language models like OpenAI’s ChatGPT or Anthropic’s Claude. Such systems promise an enticing convenience: Why labor over writing an email, drafting a presentation, or even crafting a poem when an AI can do it for you, almost instantaneously? Initially, this feels liberating: we save time, avoid tedious tasks, and leave rote labor to the machines. But overreliance on AI-produced answers poses a deeper threat. By outsourcing mental effort to generative AI, we risk diminishing our own capacities for reflection, creativity, and critique. We embrace a future in which each of us becomes less responsible for our thoughts, ideas, and modes of expression, because a machine is always ready to do the thinking on our behalf.
Such large-scale AI systems don’t develop in a vacuum. They are shaped by the data sets on which they’re trained: data sets that reflect corporate interests and the worldviews of those who develop and fund these technologies. When we consult an AI for answers, we’re effectively consulting a collection of historical patterns, corporate-imposed filters, and biased editorial decisions. The risk is that the already pervasive logic of social media, guided by engagement metrics and profit motives, will be amplified: AI will further homogenize thought, reinforcing existing power structures, beliefs, and ideologies while presenting this reinforcement as “objective,” “efficient,” or simply “smart.”
The Stakes: Who Controls Creativity and Thought?
A logical outgrowth of this AI-powered world is the possibility of controlling what people think, believe, and produce. If future societies rely almost entirely on AI for creative work, books, art, music, even political programs, then whoever designs and trains these models holds influence over culture and consciousness. This concern goes beyond mere paranoia; it is an extension of the power that social media companies already wield, shaping public discourse through their algorithms. If corporate or political interests continue to control the foundational models that churn out our daily “answers,” we may one day find that our sense of possibility, of what we can believe, imagine, or create, has silently contracted.
This is why open-source AI models matter. They offer a counterbalance to the centralized control and hidden agendas baked into proprietary systems. Open-source projects can be studied, critiqued, and adapted by a global community, making their processes more transparent and less vulnerable to manipulative designs. If we want a future where AI genuinely expands human potential rather than curtails it, it’s crucial that these tools are not exclusively beholden to government or corporate directives. As Oscar Wilde argued in The Soul of Man Under Socialism, technology might liberate us from drudgery, granting time and space to nurture our creative and intellectual pursuits, but only if it’s developed in a spirit of collective benefit rather than individual profit.
Two Futures: Dystopia and the Liberating Horizon
Let’s imagine two diverging roads for our civilization.
- The Dystopian Path: In one scenario, we become wholly dependent on AI, trusting it to make decisions at every turn, from personal banking to legal judgments to the selection of our political representatives. Human capacities for critical thinking, debate, and innovation atrophy. Society is administered from the top down, not by visible authority figures, but by opaque AI systems that gather data on every aspect of our lives. In this world, we slip into the role of helpless children, reliant on AI to solve dilemmas we can no longer unravel ourselves. Creativity and opinion-making become cogs in a corporate or governmental machine, curated by code that only a handful of technocrats understand. A “democratization” of technology degenerates into a near-total subjugation of human will, disguised by the promise of frictionless convenience.
- The Liberating Horizon: In another scenario, society acknowledges the pitfalls of automation and carefully weaves AI into our lives in ways that augment, rather than supplant, human faculties. We use generative AI to handle mundane tasks, freeing us to focus on what truly demands our empathy, curiosity, and imagination. Transparency and regulation ensure that these models remain tools of creative collaboration, not gatekeepers of thought. Education systems evolve to emphasize critical thinking, reasoning, and the craft of human expression, making us less prone to blindly accept whatever our devices deliver. Open-source models, liberated from the grasp of narrow corporate or political agendas, allow communities to craft AI systems tailored to their values and needs. Rather than relegating us to passive recipients, technology serves as an instrument that extends our range of creative and civic engagements.
Toward a Thicker “Now”
The future of civilization hinges on how we navigate our relationship with powerful new tools. Will these technologies broaden our “temporal bandwidth,” as Pynchon put it, allowing us to draw more richly from past insights and future dreams? Or will we remain fixated on a perpetual present, clicking “like” and accepting machine-generated answers without reflection?
Paradoxically, the same technology that narrows our temporal horizon also offers the potential for transcending it. AI could free us from mindless labor, enabling deeper engagement with art, philosophy, and community. Social media could become a space for genuine dialogue rather than a high-speed auction of our attention. But this redirection requires intentional action: scrutiny of who designs these platforms, what their incentives are, and how they influence the flow of information.
If we can preserve the sacredness of our attention, as well as the integrity of our personal conscience, then even the most advanced algorithms might become vehicles of genuine empowerment. But if we remain content to let machines serve up our opinions, amusements, and beliefs, if we trade the long arc of human creativity for the immediate gratification of the “permanent now”, we risk drifting further into a “totally administered world.” In such a place, our illusions of freedom will be orchestrated by code, and our capacity for true self-awareness will become as tenuous as a tweet.
The stakes are both personal and collective. Technology shapes not only how we spend our time but also how we conceive of who we are and what kind of world we want to inhabit. It’s worth remembering that the power to define technology’s role in society still, at least partially, lies in human hands. The question is whether we’ll use that power to create a future of genuine flourishing or let convenience lull us into submission. We can decide whether this technology will be harnessed for thoughtless consumption or for the deep, deliberate cultivation of human potential. A civilization that too eagerly welcomes machine-made answers risks forgetting how to pose truly meaningful questions.