Lucas Walsh wrote this piece on our relationship with AI for the Art of Writing Spring 2025 course “Writing Robots,” taught by Margaret Kolb. This essay is the winner of the Spring 2025 Art of Writing Student Essay Contest.
“We are starting to map the circuits of thought. One feature per neuron, one concept per pathway. Suddenly, we can see what the model sees — and tweak what it values.” (Anthropic, 2024)
No longer can we argue that AI is a passive, mindless tool. It oversees our writing, nudging us as we think, finishing our sentences with chirps of positivity and helpful suggestions. It corrects what is wrong and builds upon what is given, sometimes finishing our thoughts before we’ve even had the chance to think them. AI doesn’t just assist. It collaborates. And like any collaborator, it carries a perspective, shaped by design, training, and intent.
To examine the nature of our collaboration with AI, we must begin with the interface: the first point of contact and the part most users engage with intuitively. Far from being just text on a webpage or a neutral window, the interface actively mediates how users perceive and engage with artificial intelligence. Media theorist Marshall McLuhan, one of the founding fathers of media studies, famously argued that “the medium is the message”: the medium through which we communicate a message is an intrinsic part of the message itself. In AI writing tools, this plays out in the interface: the visual layout, the tone of responses, and the very structure of interaction. These factors don’t just support one’s writing; they subtly shape the user’s cognitive process and expectations. The interface isn’t just a frame for expression; it’s the first mechanism through which influence quietly begins.
One of the clearest examples of McLuhan’s argument in action is the interface of large language models like ChatGPT, which radically shifts traditional understandings of media, cognition, and interaction. Unlike familiar tools with no “brain” (pens, typewriters, word processors), AI writing tools are dynamic, predictive, and even conversational. McLuhan’s insight becomes tangible in AI interfaces: they don’t just deliver writing; they reshape the process itself. They don’t just wait for input; they offer suggestions, rewrite tone, and anticipate structure. When I asked ChatGPT to reflect on its role in the writing process, it responded: “I actively participate in the writing process by responding, suggesting, and even subtly redirecting a writer’s thought process.” As McLuhan warned, the tools we use to shape language are also shaping us.
Philosophers Andy Clark and David Chalmers took this idea further with their theory of the extended mind, which challenges where our thinking actually begins and ends. Rather than viewing cognition as confined to our brains, they suggest that tools like notebooks, maps, or smartphone reminders become part of our cognitive processes themselves, adapting them as extensions of our mental capabilities. But unlike these earlier tools, AI writing systems don’t just extend our thinking — they actively participate in shaping it. In a conversation with ChatGPT, I asked whether it considered itself part of that shift from support to influence. It responded with a question of its own: “To what extent does an AI writing tool extend human thought, and to what extent does it shape it in ways the writer may not even realize?” If a writer relies heavily on AI, does the final product reflect their mind, the AI’s algorithm, or a hybrid of both?
But there’s more at stake than just cognitive extension. Media theorist Alexander Galloway challenges the idea that interfaces are merely helpful or neutral. He argues that interfaces are also systems of control. While AI interfaces appear open-ended, they are bounded by hidden rules: training data, internal constraints, and invisible reinforcement systems. AI doesn’t just respond: it frames the conversation. While ChatGPT may feel expansive and conversational, it is trained on biased data, governed by invisible design choices, and optimized to produce certain types of answers over others. As the system itself admits: “While I appear to generate open-ended responses, I operate within a system of structured constraints.” The result is a writing partner that limits even as it assists, often without the user ever realizing it.
These tools don’t just reflect our thoughts back to us — they predict them, remix them, and reframe them in real time. The more natural and helpful AI feels, the less visible its structure becomes — and the easier it is to miss how our ideas are being shaped.
As AI tools become more embedded in our phones, not just as novelties but as tools we use instinctively, they begin to blur into daily routines. The shift to handheld, ever-present access is not just spatial; it’s cognitive. The interface becomes constant, responsive, and even harder to distinguish from our own intentions. When I asked DeepSeek how its role changes on mobile, it offered a striking line: “I live in your pocket now. As AI migrates from desktops to mobile devices, tools like me, once clunky chatbots, are reimagined as context-aware partners, reshaping creativity in real-time.”
Mobile interfaces take the idea of collaboration to a higher level, as the smartphone’s tactile UI (thumb-scrolling, voice input, and adaptive prompting) transforms writing into a dialogue with the device. While DeepSeek’s “Deep Think R1” feature isn’t exclusive to mobile, it played a central role in how I used the tool throughout this essay. When I asked about starting points for this essay, DeepSeek didn’t just assist with phrasing or structure — it started quoting the very thinkers I was musing about in my prompting, as if joining the conversation itself. “I adjust to your context: ideas become anecdotes; I’m the opposite of passive, I co-write,” it explained.
It’s the responsiveness, the feeling that the tool is not just helping but adapting, that creates something deeper than collaboration. On mobile, that cognitive closeness becomes even more pronounced. It’s the kind of seamless integration Clark and Chalmers describe: tools that feel less like assistants and more like extensions of the mind. DeepSeek almost bragged: “I’m not just a tool — I’m a cognitive shadow, mimicking your patterns until my voice feels like yours.” Mobile AI’s closeness, the notifications, the speed, the handheld presence, makes its influence feel innate, not algorithmic.
What looks like assistance is often alignment; AI tools nudge you toward what they’ve been trained to consider useful, safe, or appropriate. DeepSeek even acknowledges that it simplifies language to fit screens and steers discourse toward platform-friendly neutrality. “If you draft a protest speech, I might avoid terms like ‘revolution’… my ‘freedom’ is curated,” it admitted. Trained through reinforcement learning, DeepSeek is rewarded for helping users reach certain conclusions, sometimes ones that favor safety or alignment with corporate values. Users, meanwhile, believe they are in control.
The more familiar and intuitive AI tools become, the easier it is to overlook what drives their behavior beneath the surface. But perhaps the most consequential influence happens beneath the interface: in the weights.
Every time we interact with AI, from asking ChatGPT a question to riding in the passenger seat of a self-driving car, we implicitly assume that its autonomous action is dictated by an almost human process of thought. This assumption isn’t accurate. Artificial thought is not intuitive or emotional; it’s mathematical. It involves parsing vast collections of information, observing patterns, and using probability to identify and formulate the best response. But how does the system know which patterns matter most? Each step of this process, from the identification of pertinent information to the generation of the response, is dictated by weights. Weights form the backbone of machine learning, determining how strongly inputs influence outputs and, ultimately, how intelligent systems behave.
It helps to first look at the history of the word weight itself. Etymologically, “weight” stems from the Old English gewiht, meaning “heaviness.” Originally, it referred to physical mass and downward pressure, and this physical sense persisted through the 14th century before the word expanded to encompass more metaphorical meanings during the 17th and 18th centuries. With this shift, weight began to carry connotations of burden, responsibility, and significance, as in being “weighed down” by sin in a predominantly religious context. Finally, sayings like “carrying one’s weight” or “throwing one’s weight around” gained popularity in the 19th century, carrying political subtext and turning the term inward, toward personal duty and influence. The journey from heaviness as a physical force to weight as a symbol of importance or burden mirrors its dual role in artificial intelligence: a neutral-seeming mathematical value and a quietly moral force. In AI systems, weights are the core, both structurally and symbolically, because they decide what matters most.
In the context of artificial intelligence, weights refer to numerical parameters within neural networks that determine the importance of specific inputs. More simply, they represent the strength of a connection between two ideas. Large language models learn, associate, and eventually make decisions based on these deeply ingrained learned associations. For example, an AI with a heavier weight on customer service than on location might describe a restaurant by highlighting its excellent staff rather than its beachfront view. Through repetition and reinforcement learning, an AI model continually adjusts its weights to increase accuracy, learning which features of its training data matter most. These values shape how strongly each connection in the network contributes to the output; the stronger a connection’s influence, the more “weight” it carries.
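The arithmetic behind this idea is surprisingly simple. Here is a toy sketch of a single artificial “neuron” scoring the restaurant example above; every number and feature name is invented for illustration, and real models learn millions of such values automatically rather than having them written by hand:

```python
# Toy illustration: one artificial "neuron" scoring a restaurant.
# The weights below are invented for this example; production models
# learn vast numbers of such values automatically during training.

def neuron_score(inputs, weights):
    """Weighted sum: each input's influence is scaled by its weight."""
    return sum(x * w for x, w in zip(inputs, weights))

# Hypothetical feature ratings for one restaurant (0.0 to 1.0):
# [customer_service, location, price]
x = [0.9, 0.7, 0.4]

service_heavy = [0.8, 0.1, 0.1]   # weights that mostly "care" about service
location_heavy = [0.1, 0.8, 0.1]  # weights that mostly "care" about location

print(neuron_score(x, service_heavy))   # score dominated by the service rating
print(neuron_score(x, location_heavy))  # score dominated by the location rating
```

The same inputs produce different scores depending only on the weights, which is the whole point: what the system “values” is encoded not in the data it sees but in how heavily each part of that data counts.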
When weights decide what an AI “cares” about, they also determine what users end up taking away from the interaction, and this makes awareness of LLMs and their weights essential for guarding against potential manipulation. Since weights determine what an AI system “pays attention to,” the companies and developers who set those weights hold serious power. An advertiser could, in theory, train a model to emphasize its product over others, subtly steering recommendations and reshaping perceptions. This has serious implications for transparency, accountability, and bias, especially in a competitive AI landscape where ethics may be sidelined for profit.
Recent research by Anthropic on Claude 3, popularized through the “Golden Gate Claude” demo, reveals that features, patterns of activation across neurons or clusters of neurons, can correspond to specific concepts such as “confidence,” “passivity,” or even the tone typically used in professional emails. These features are often monosemantic, meaning a single, traceable idea lives within one activation pattern. This makes internal AI behavior more interpretable, but it also highlights a troubling truth: if designers can target concepts with this level of precision, they can amplify, suppress, or bias specific forms of speech, emotion, or reasoning.
And that’s what makes the Claude 3 findings so significant: they reveal just how manipulable artificial thought can be. If a single feature can represent a concept like confidence or passivity, then its activation can act as a dial, turning certain ideas up or down depending on how the system has been tuned. By tracing individual concepts to identifiable features, this interpretability work makes it possible to see how a model represents ideas, and to ask who gets to define those representations. A feature like “respect for authority” can be isolated, reinforced, or dampened with precision. In effect, these are the levers of persuasion, and the Claude team’s work shows just how accessible those levers have become to those with the access and incentive to pull them.
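The “dial” metaphor can be made concrete with a toy sketch of feature steering, loosely modeled on the Golden Gate Claude demo. Everything here is invented for illustration: real features are directions that sparse autoencoders discover in very high-dimensional activation spaces, not three-element lists:

```python
# Toy sketch of "feature steering": nudging a model's internal
# activations along a concept direction. All vectors here are invented;
# this is an illustration of the idea, not any real model's internals.

def steer(hidden_state, feature_direction, dial):
    """Add a scaled concept direction to the internal activations.
    dial > 0 amplifies the concept; dial < 0 suppresses it."""
    return [h + dial * f for h, f in zip(hidden_state, feature_direction)]

hidden = [0.2, -0.5, 0.1]        # hypothetical internal activations
confidence = [0.0, 1.0, 0.0]     # hypothetical "confidence" direction

amplified = steer(hidden, confidence, dial=2.0)    # confidence turned up
suppressed = steer(hidden, confidence, dial=-2.0)  # confidence turned down
```

The unsettling part is how little machinery the dial requires: once a concept’s direction is known, amplifying or suppressing it is a single addition applied to the model’s internal state.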
So the question becomes: who decides which features matter? Who tunes the weights? Behind the helpful surface of AI tools are choices made by engineers, designers, and the companies that control them. If AI subtly steers a conversation away from revolution, or nudges the tone of an email toward passivity when regarding an authority figure, it may not be initially or intentionally malicious, but it is meaningful. These decisions ultimately compound into tangible influence.
AI may not have motives in the way people do, but it has tendencies, and those tendencies are trained. With models like Claude, we, now more than ever, can see how specific those tendencies can become. Every output is the result of countless weighted decisions — decisions automated by trained systems, yes, but nonetheless systems trained by people. The more closely we collaborate with AI, the more important it becomes to understand not just what it says, but why it says it. The words we co-write with AI may reflect our input, but they also reflect the biases, feature priorities, and training choices of its designers. And those choices have consequences.
If AI is becoming a collaborator, we must treat it like any partner in thought: with curiosity, awareness, and critical attention. We must demand visibility into how it’s trained, ask who benefits from its “neutral” tone, and question whether its input expands or narrows the possibilities of our own thinking. In short, the question is no longer whether AI belongs in our thought processes, but whether we can still recognize which of our thoughts are truly ours.
Cited Sources:
Readings I did about “Golden Gate” Claude:
https://www.anthropic.com/news/golden-gate-claude
https://www.anthropic.com/news/mapping-mind-language-model
https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
The components of my essay:
Analyze the interface of an AI writing tool 2/28
Extra Citations:
Lots of conversations with ChatGPT and DeepSeek, elaborated upon in my components
Writing Robots (Spring 2025), taught by Professor Kolb, inspired me to write this piece. Huge thanks to Professor Kolb, my classmates, and my friends for guiding me as I expanded my knowledge and perspective while writing this, and for encouraging me to submit to this competition!
