The Programmer Identity Crisis ❈ Simon Højberg ❈ Principal Frontend Engineer


I am a programmer. A coder. A keyboard cowboy. A hacker. My day is spent
punching keys; catalyzing code. It’s fun; it’s my identity. The editor, Vim, is
my workshop, my sanctum[1]. Here, I hone my craft, sharpen my tools, expand my
capabilities through curiosity, and for a while, escape into a trance-like
flow. A full-screen terminal window with nothing between me and thought but
INSERT mode. At the altar of Bram[2], I spin reality’s yarn out of thin air
into bits beaming through silicon. A completely imagined, non-tangible world
with IRL ramifications. A place in which I find comfort in craft and
creativity. Time disappears into puzzle-solving. Where connecting pieces
matters more than completing a picture. Craft springs from fingers to buffer. I
program and fade away into flow and composition.

In the late 1950s at MIT, a new and electrifying culture was emerging. Hands-on,
experimental, and anti-establishment. I like to imagine myself there, sitting
at the slate-blue L-shaped console. Typing away at the Flexowriter[3] as it
spits out punched paper tape programs to be fed to the nearby wall of metal
uprights, tangled wire, and early transistors: the “Tixo”[4]. Waiting with
bated breath, as enthralling beeps emanate from the machinery while it runs the
program: will it succeed? I imagine the Hackers—as they came to be known—around
me, pointing at code and offering advice on how to achieve “The Right
Thing”[5]: the perfect program, pristine, elegant, and succinct. I can sense
the original culture of programming pouring out of them as they passionately
embody “The Hacker Ethic” while
sharing stubs of their own paper programs to guide me on my quest.

It was there—in the computing crucible of building 26—that the craft of coding
was cast. Nearly 70 years ago, members of the Tech Model Railroad Club immersed
themselves in the language of machines to pursue a mastery of digital wizardry.
The sublime magic of manipulating formal languages to solve increasingly
challenging cryptic conundrums and—core to the culture—sharing findings with
other students of the dark arts of software sorcery.

The ghosts of ancient Hackers past still roam the machines and—through the
culture they established—our minds. The craft they forged lingers. A deep and
kinetic craft we’ve extended and built a passionate
industry on. We are driven by the same wonder, sense of achievement, and
elegance of puzzle-solving as they were. Still driven by “The Right Thing.”
These constitutional ideas, the very identity of programmers, are increasingly
imperiled. Under threat. The future of programming, once so bright and
apparent, is now cloaked in foreboding darkness, grifts, and uncertainty.


In fact, if we are to trust the billion-dollar AI industry, the denizens of
Hacker News (and its overlords), and the LinkedIn legions of LLM lunatics, the
future of software development has little resemblance to programming.
Vibe-coding—what seemed like a meme a year ago—is becoming a mainstay.

Presently (though this changes constantly), the court of vibe fanatics would
have us write specifications in Markdown instead of code. Gone is the deep
engagement and the depth of craft we are so fluent in: time spent in the
corners of codebases, solving puzzles, and uncovering well-kept secrets.
Instead, we are to embrace scattered cognition and context switching between a
swarm of Agents that are doing our thinking for us. Creative puzzle-solving is
left to the machines, and we become mere operators disassociated from our
craft.


Some—more than I imagined—seem to welcome this change, this new identity:
“Specification Engineering.” Excited to be operators, cosplaying as Steve
Jobs to “play the orchestra.” One can only wonder why they became programmers
in the first place, given their seeming disinterest in coding. Did they
confuse Woz with Jobs?

I can’t imagine (though perhaps I’m not very imaginative) that Prompt, Context,
or Specification “Engineering”[6] would lead to a bright and prosperous
profession for programmers. It reeks of a devaluation of craft, skill, and
labor. A new identity where our unique set of abstract thinking skills isn’t
really required; moving us into a realm already occupied by product managers
and designers.

Inside companies, power dynamics are shifting as this new identity is pushed.
In a mad dash to increase productivity in the wrong place[7], developers are
forced to use LLMs in increasingly specific ways. Conform or be cast out. Use
the products that herald our obsolescence, or resign. Rarely before has
management mandated the specifics of our tools. Tools, like those of a chef or
carpenter, that we’ve taken great pride in curating and honing ourselves:
careful configuration of our editor, tinkering with dot files, and dev
environments. As part of the craft, we’ve been dedicated and devoted to
personalizing our toolsets to match our thinking. It feels like a violation to
have this be decreed by management, who have little to no connection to the
day-to-day, and who should instead be concerned with outcomes, process, and
facilitating creativity. For decades, programmers have been pampered within
companies. These narratives offer a new way for management to tip the balance
back in their favor.


Some—with glee and anticipation—liken LLMs and their impact to the transition
from low-level to high-level languages: from Assembly to Fortran. This, I
think, is wrong in a couple of ways. Firstly, the leap we made with Fortran
was rooted in programming: Fortran didn’t try to eliminate a craft but
built on top of it. Fortran didn’t remove the precision and expressiveness of
programmatic formalisms, but expanded them. Secondly, Fortran was always
successful in producing the right outcome given its input. Neither of these things
is true in the world of LLMs. I can practically hear the cultists cry out:
“You’re just using it wrong” as they move the goalposts to fit an ever-changing
narrative. But we can’t expect AI tools to have the same outcomes as
programming languages. They are designed based on a different set of rules and
parameters.

There aren’t enough swear words in the English language to adequately describe
how frustrating computers and programming can be, but we have at least always
been able to count on them for precision: to perform exactly as instructed
through programming. It is perhaps because of our reliance on and trust in the
precision of computers that we seem so primed to believe chatbots when they
gaslight us into thinking they did what we asked of them[8].

LLMs, and our work with them, are naturally imprecise: both in the properties
of Large Language Models themselves, and in the very manner we instruct them,
through misinterpretable natural language. Curious that we chose this approach to
computing, given how much we, programmers, cringe at non-determinism. We prefer
predictability, compositionality, idempotence, and integration tests that
aren’t flaky. LLM code represents the opposite of that: inconsistent chaos.

Dijkstra, in “On the foolishness of ‘natural language
programming’,”
wrote, rather poignantly: “We have to challenge the assumptions that natural
languages would simplify work.” And: “The virtue of formal texts is that their
manipulations, in order to be legitimate, need to satisfy only a few simple
rules; they are, when you come to think of it, an amazingly effective tool for
ruling out all sorts of nonsense that, when we use our native tongues, are
almost impossible to avoid.”


There’s a movement to distance AI-assisted development (Agents in the driver’s
seat) from vibe-coding by imposing rigor and bureaucracy, but it ignores the
fundamental nature of the beast. I find that I don’t read the code an LLM
generates for me as closely as I would if I had written it myself or had
reviewed it in a PR. There seems to be something innate to LLM coding that
makes my eyes glaze over. I gloss. Overwhelmed and bored. Blindly accepting
spiked pitfalls, provided CI passes and the program compiles. Not checking if
the tests are even set up to run, or if it pulled in a nonexistent library or
implemented a whole one itself. Of course, I pay the price later, when I fall
into my own trap and realize that hours of work were built on a broken bedrock.
Or maybe I don’t notice till someone calls me out in a pull request, a bug
report, or when I’m paged for an incident.

A review or synopsis of a book can never replace the experience of reading it
yourself: contemplating ideas for hours and hundreds of pages as each sentence is
carefully consumed. In the same way, skimming summaries of completed AI tasks
robs us of forming a deep understanding of the domain, the problem, and the
possible solutions; it robs us of being connected to the codebase. Taking the
plunge into the abyss of one’s ignorance to reveal, learn, and understand a
topic and its implications is both gratifying and crucial to good software.
Ownership, agency, and deep, fulfilling work have been replaced with scattered
attention spent between tabs of Agents.

Joan Didion, the great American essayist, famously wrote: “I write entirely to
find out what I’m thinking, what I’m looking at, what I see and what it
means.” Peter Naur explores this
same concept in his work, “Programming as Theory
Building.” Naur’s “Theory” embodies
the understanding of a codebase. How it operates, its formalisms, and its
representations of the real world. A context and insight that is only gained
from immersion. Naur describes the “Theory” as the primary outcome of
programming, the actual product, as opposed to the software it resulted in.
Only with a well-developed “Theory” can one effectively apply extensions and
bug fixes to codebases. With the ambivalent glances at code that come with
vibing, building such a theory is difficult. Naur would deem it impossible, I’m
sure.

Good design emerges from immersion. From steeping. From back-and-forth work in
the text buffer and, often, away from the keyboard. It’s impossible to hold a
whole codebase in our minds. We must dive into modules, classes, and functions
to sharpen our blurry mental models. Read and write code to extend our
cognition, regain familiarity, and rebuild our understanding of the problem
domain.

Once a semblance of context has been conjured, and through a plenitude of poor
attempts, we can finally uncover the solution. The dissonance of bad design
must be felt: it’s only when we write repulsive and repetitive code that we
realize that there is a better, more succinct, elegant, compositional, and
reusable way. It causes pause. A step back to think about the problem deeply.
Start over. Rinse, repeat. AI Agent work is the diametric opposite: frictionless; we
avoid alternative solutions and can’t know if what we accept is flawless,
mediocre, terrible, or even harmful. Quality is crafted by iteration—how else
might we imagine good designs if we never explore objectionable ones?


The cognitive debt of LLM-laden coding extends beyond disengagement from our
craft. We’ve all heard the stories. Hyped up, vibed up, slop-jockeys with
attention spans shorter than the framework-hopping JavaScript devs of the early
2010s, sling their sludge in pull requests and design docs, discouraging
collaboration and disrupting teams. Code reviewing coworkers are rapidly losing
their minds as they come to the crushing realization that they are now the
first layer of quality control instead of one of the last. Asked to review;
forced to pick apart. Calling out freshly added functions that are never
called, hallucinated library additions, and obvious runtime or compilation
errors. All while the author—who clearly only skimmed their “own” code—is
taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”

Meddling managers and penny-pinching execs are pushing (hopefully unknowingly)
for fewer human interactions on teams. Isolated and bereft of connection, we
are now empowered and encouraged to build walls around our work experience.
Reaching for LLMs rather than people when we need a pair programmer, someone to
ping pong solutions with, prototype, sketch architectures with, or help answer
expert questions about esoteric parts of the codebase. We no longer require
onboarding buddies, mentors, or peers; instead, we can talk to machines. With
LLMs, avoiding human contact is so easy that it might just become the norm. The
future really is bright…


It’s disturbing how readily we accept the AI hype narrative, actively
participate in the planned erasure of our craft[9], and so willingly offer up
our means of thinking. We were the lucky ones who got to earn a living from our
hobbies. Even if we produce punctilious and rigid processes to counter
slop—some of which bear a striking similarity to the waterfall model of
yore—we’ve
still outsourced the fun part of the job and replaced it with directorial
drudgery. What’s next, TPS reports?

LLMs seem like a nuke-it-from-orbit solution to the complexities of software.
Rather than addressing the actual problems, we reached for something far more
complex and nebulous to cure the symptoms. I don’t really mind replacing sed
with Claude or asking it for answers about a library or framework that, after
hours of hunting through docs, I still seek clarity on[10]. But I profoundly
do not want to be merely an operator or code reviewer: taking a backseat to the
fun and interesting work. I want to drive, immerse myself in craft, play in
the orchestra, and solve complex puzzles. I want to remain a programmer, a
craftsperson.

I prefer my tools to help me with repetitive tasks (and there are many of those
in programming), understanding codebases, and authoring correct programs. I
take offense at products that are designed to think for me. To remove the
agency of my own understanding of the software I produce, and to cut
connections with my coworkers. Even if LLMs lived up to the hype, we would
still stand to lose all of that and our craft. Humans matter more than machines
and their backing corporations, who are profiting while the rest of us chase
the new American Dream they sell. As payment, we offer our critical thinking
skills, our fun, our craft, our privacy, and perhaps, our planet.

Posted October 2025
