Where Do We Go From Here?


Table of Contents

This is a long article, so I’ve broken it up into a series of posts, listed below. You can also read the full work as a PDF or EPUB.

Previously: New Jobs.

Some readers are undoubtedly upset that I have not devoted more space to the wonders of machine learning: how amazing LLMs are at code generation, how incredible it is that Suno can turn hummed melodies into polished songs. But this is not an article about how fast or convenient it is to drive a car. We all know cars are fast. I am trying to ask what will happen to the shape of cities.

The personal automobile reshaped streets, all but extinguished urban horses and their waste, supplanted local transit and interurban railways, germinated new building typologies, decentralized cities, created exurban sprawl, reduced incidental social contact, gave rise to the Interstate Highway System (bulldozing Black communities in the process), gave everyone lead poisoning, and became a leading cause of death among young people. Many parts of the US are highly car-dependent, even though a third of us don't drive.
As a driver, cyclist, transit rider, and pedestrian, I think about this legacy
every day: how so much of our lives is shaped by the technology of personal
automobiles, and the specific way the US uses them.

I want you to think about “AI” in this sense.

Some of our possible futures are grim, but manageable. Others are downright
terrifying, in which large numbers of people lose their homes, health, or
lives. I don’t have a strong sense of what will happen, but the space of
possible futures feels much broader in 2026 than it did in 2022, and most of
those futures feel bad.

Much of the bullshit future is already here, and I am profoundly tired of it.
There is slop in my search results, at the gym, at the doctor’s office.
Customer service, contractors, and engineers use LLMs to blindly lie to me. The
electric company has hiked our rates and says data centers are to blame. LLM
scrapers take down the web sites I run and make it harder to access the
services I rely on. I watch synthetic videos of suffering animals and stare at
generated web pages which lie about police brutality. There is LLM spam in my
inbox and synthetic CSAM on my moderation dashboard. I watch people outsource
their work, food, travel, art, even relationships to ChatGPT. I read chatbots
lining the delusional warrens of mental health crises.

I am asked to analyze vaporware and to disprove nonsensical claims. I
wade through voluminous LLM-generated pull requests. Prospective clients ask
Claude to do the work they might have hired me for. Thankfully Claude’s code is
bad, but that could change, and that scares me. I worry about losing my home. I
could retrain, but my core skills—reading, thinking, and writing—are
squarely in the blast radius of large language models. I imagine going to
school to become an architect, just to watch ML eat that field too.

It is deeply alienating to see so many of my peers wildly enthusiastic about
ML’s potential applications, and using it personally. Governments and industry
seem all-in on “AI”, and I worry that by doing so, we’re hastening the arrival
of unpredictable but potentially devastating consequences—personal, cultural,
economic, and humanitarian.

I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that come with working through a task by hand: the cultivation of what James C. Scott would call metis. I have never used an LLM for my writing, software, or personal life,
because I care about my ability to write well, reason deeply, and stay grounded
in the world. If I ever adopt ML tools in more than an exploratory capacity, I
will need to take great care. I also try to minimize what I consume from LLMs.
I read cookbooks written by human beings, I trawl through university websites
to identify wildlife, and I talk through my problems with friends.

I think you should do the same.

Refuse to insult your readers: think your own thoughts and write your own words. Call out people who send you slop. Flag ML hazards at work and with friends. Stop paying for ChatGPT at home, and convince your company not to sign a deal for Gemini. Form or join a labor union, and push back against management demands that you adopt Copilot; after all, it’s for entertainment purposes only. Call your members of Congress and demand aggressive regulation which holds ML companies responsible for their carbon and digital emissions. Advocate against tax breaks for ML datacenters. If you work at Anthropic, xAI, etc., you should think seriously about your role in making the future. To be frank, I think you should quit your job.

I don’t think this will stop ML from advancing altogether: there are still
lots of people who want to make it happen. It will, however, slow them down,
and this is good. Today’s models are already very capable. It will take time
for the effects of the existing technology to be fully felt, and for culture,
industry, and government to adapt. Each day we delay the advancement of ML
models buys time to learn how to manage technical debt and errors introduced in
legal filings. Another day to prepare for ML-generated CSAM, sophisticated
fraud, obscure software vulnerabilities, and AI Barbie. Another day for workers
to find new jobs.

Staving off ML will also assuage your conscience over the coming decades. As
someone who once quit an otherwise good job on ethical grounds, I feel good
about that decision. I think you will too.

And if I’m wrong, we can always build it later.

Despite feeling a bitter distaste for this generation of ML systems and the
people who brought them into existence, they do seem useful. I want to use
them. I probably will at some point.

For example, I’ve got these color-changing lights. They speak a protocol I’ve
never heard of, and I have no idea where to even begin. I could spend a month
digging through manuals and working it out from scratch—or I could ask an LLM
to write a client library for me. The security consequences are minimal, it’s a
constrained use case that I can verify by hand, and I wouldn’t be pushing tech
debt on anyone else. I still write plenty of code, and I could stop any time.
What would be the harm?

Right?

… Right?


Many friends contributed discussion, reading material, and feedback on this
article. My heartfelt thanks to Peter Alvaro, Kevin Amidon, André Arko, Taber
Bain, Silvia Botros, Daniel Espeset, Julia Evans, Brad Greenlee, Coda Hale,
Marc Hedlund, Sarah Huffman, Dan Mess, Nelson Minar, Alex Rasmussen, Harper
Reed, Daliah Saper, Peter Seibel, Rhys Seiffe, and James Turnbull.

This piece, like almost all my words and software, was written by hand—mainly
in Vim. I composed a Markdown outline in a mix of headers, bullet points, and
prose, then reorganized it in a few passes. With the structure laid out, I
rewrote the outline as prose, typeset with Pandoc. I went back to make
substantial edits as I wrote, then made two full edit passes on typeset PDFs.
For the first I used an iPad and stylus, for the second, the traditional
pen and paper, read aloud.

I circulated the resulting draft among friends for their feedback before
publication. Incisive ideas and delightful turns of phrase may be attributed to
them; any errors or objectionable viewpoints are, of course, mine alone.
