The thing about vibe coding is that the vibes never happen in a vacuum. As Kamala Harris once said, “I don’t know what’s wrong with you young people. You think you just fell out of a coconut tree? You exist in the context of all in which you live and what came before you.” The tools we use don’t exist in isolation. They sit inside politics, economics, energy systems, and whatever technological revolution we’re currently living through. Right now, vibe coding with LLMs like Claude Code comes with a number of moral dilemmas.
Politics
AI is already tangled up in geopolitics, and not just in the vague “everything is political” sense. Recently, the U.S. Department of Defense clashed with Anthropic over how its Claude models could be used. Anthropic had built guardrails prohibiting uses such as developing autonomous weapons and conducting mass domestic surveillance, while the Pentagon argued those restrictions should not apply in national security contexts. The dispute raised broader questions about how much control AI companies actually retain once their technology enters government systems.
When I first read about that, I had a brief moment of panic. Tools like Claude are not just research assistants or coding helpers. They sit inside a much larger ecosystem of governments, defence contracts, and national strategy. At some point you may have to decide whether you’re comfortable relying on something that also lives in that world.
Environmental Impact
This one hits a bit close to home, given that my PhD sits in this space. Large language models run in data centres that require enormous amounts of electricity, and those centres often use water for cooling. The exact footprint of a single prompt is hard to pin down because it depends on the model, the hardware, and the location of the data centre.
But the broader point is not really disputed: as demand for AI grows, so does the infrastructure behind it. The environmental cost is simply harder for users to see. As an individual researcher I can try to be mindful about how often I rely on these tools, but I have almost no visibility into the energy systems or supply chains behind the responses I get.
Publishing and Accountability
When your name is on a paper, every line of code is yours to defend. AI complicates that. If something you didn’t fully write ends up in a published pipeline, how accountable are you really?
The same goes for writing. I believe the process of writing is indistinguishable from the process of thinking. If the prose is meaningless, it usually means the thinking is unfinished. In LLM-era academia, the temptation is to smooth the text before the ideas are ready, and you can end up with something that reads like a paper without containing a thought.
Privacy
Even setting aside official training policies, there have already been cases where chat data appeared where it shouldn’t. In March 2023, OpenAI temporarily took ChatGPT offline after a bug allowed some users to see titles from other users’ chat histories. It wasn’t a catastrophic leak, but it was enough to remind people that these systems are not completely private.
For researchers, that raises obvious questions. How much of our work ends up inside these systems, and what gets logged or stored? When you are working on unpublished ideas, datasets, or early drafts of papers, the boundary between “helpful tool” and “data leak” becomes hard to ignore.
Workload and Productivity
A few days ago I asked Claude Code to compare the logic in my LaTeX documentation with the code in my pipeline. It immediately spotted a fairly major mistake. I probably would have found it eventually, but maybe not before it had propagated through several notebooks and plots.
Moments like that highlight a tension I keep coming back to: shipping fast versus shipping with integrity. Do you move quickly, produce something messy but functional, and learn as you go? Or do you slow down and make sure you understand every line?
Building this website brought that question up again. The design decisions were mine, but I barely touched the underlying code and did almost no manual debugging. The site works, but my understanding of how it actually works is fairly shallow. The boundary between “automating the boring part” and “skipping the learning part” starts to blur.
Are We Losing a Skill?
Calculators didn’t kill mathematics, but they did change what being “good at maths” meant. Instead of becoming better programmers, are we just becoming better orchestrators of AI tools? That might not be a bad trade. New tools have always shifted what expertise looks like.
You could argue the same happened in art and engineering. Leonardo da Vinci didn’t keep grinding his own pigments forever. At some point the craft evolves and the focus moves elsewhere. Maybe learning how to guide these systems effectively is simply the modern version of that shift.
But the comparison only goes so far. Calculators sped up arithmetic, whereas AI doesn’t just accelerate the mechanical parts of the work; it starts to creep into the thinking itself. Outsourcing pieces of the thinking process feels very different from outsourcing the manual labour around it.
The Cost
The currency here is tokens, and the exchange rate is set by a very small number of companies. Running large language models requires enormous amounts of compute, which means only a handful of labs can realistically operate them.
Pricing is already structured in tiers, with higher plans offering greater access and fewer limits for heavier users. If demand keeps growing and infrastructure costs stay this high, those same companies will ultimately decide where the price settles.
Am I becoming dependent on something future me cannot afford?
Summary
I don’t have answers to most of these questions. The terrain is too new. I don’t believe using LLMs is inherently immoral, but I also don’t have a fully formed ethical framework for it yet. Then again, is there truly ethical consumption of anything under capitalism?
Anyway.
Let’s ask Chat.