On AI, anger, and the way from here

Jason Santa Maria, Large Language Muddle:

As someone who has spent their entire career and most of their life participating and creating online, this sucks. It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.

The part that stings most is they didn’t even ask. They just assumed they could take everything like it was theirs. The power imbalance is so great, they’ll probably get away with it. ... I imagine there will be a time when using these tools or not creates a rift, and maybe it will be difficult to sustain a career in our field without using them. Maybe something will change, and I’ll come around to using these services regularly. I don’t think I’ll ever not be angry about it.

This is involuntary stone soup at scale. I'm also dismayed about how LLMs came to be, yet aware that the bomb still works regardless of my feelings. I'm convinced I need to understand this technology—I don't think I can afford to simply opt out.

But I'm also staying tuned to skeptical takes, fighting to keep my novelty-seeking brain from falling into cult-like enthusiasm. While I can't dismiss this technology as pure sham, I refuse to swallow inflated claims about what it actually is. I want clear-eyed understanding.

Jason's anger resonates because it points to a deeper loss:

And still that anger. It’s not just that they didn’t ask. If these tools have so much promise, could this have been a communal effort rather than a heist? I don’t even know what that would’ve looked like, but I can say I would feel much differently about AI if I could use a model built on communal contributions and opt-ins, made for the advancement of everyone, not just those who can pay the monthly subscription.

Behind that anger is sadness. How do we nurture curiosity and the desire for self-growth?

I believe there's a path forward that can nurture curiosity and growth.

I've seen how these models can surface insights and patterns from overwhelming pools of information—hallucinations are always possible, but it's surprising how often they don't happen. I've seen how their "spicy autocomplete" can help me get where I intended to go faster—like talking to a fellow ADHD'er who sees where I'm going and jumps straight there.

And these models aren't disappearing, even if the companies burning cash do. The models already released openly will power unexpected developments for decades, even if just passed around as warez torrents.

This feels like the dot-com bubble all over again. When that bubble burst, the web didn't die: people with spare time and leftover experience built the blogosphere, API mashups, and the foundations of Web 2.0.

I suspect we're heading for a similar pattern. Maybe it's wishful thinking, but I kind of expect we'll see a bust followed by cheap, surplus capacity that—while not the communal effort we deserved—becomes accessible to anyone who wants to experiment and build something better.
