What I'm thinking about AI and LLMs

I'm in a weird place with this current AI wave in the tech industry. I feel like a good chunk of folks would tar & feather me if I wrote anything but a complete denunciation, while another chunk I already blocked during the crypto & NFT craze. I still feel like writing something, though, if only to bounce it off the screen for myself.

The way LLMs have been trained is problematic at best. In an ideal world, we could train these systems with full consent and reciprocal compensation for the intellectual produce they consume. And we could be normal about LLMs as a technology. Instead, we have an extractive regime disrupting the very ecosystems fueling it.

That said, my continuing paycheck seems contingent on developing AI literacy while abiding these unpleasantries. As Upton Sinclair wrote, "It is difficult to get a man to understand something, when his salary depends on his not understanding it." The thing is, I do understand it rather decently. But my mortgage doesn't care, and I can't retire for another 30 years at least. So, I need to "stay current".

Still, I've long been fascinated with both artificial and natural intelligence. As a kid, I loved Tron and Automan and Knight Rider. I read Hofstadter's Gödel, Escher, Bach and Minsky's Society of Mind in high school. I double-majored in Computer Science and Psychology in college. I almost majored in Cognitive Science, but the advanced math made me surly. So I'm admittedly susceptible to enthusiasm toward this technology, though I'm trying to stay grounded.

I think it's hyperbolic to say LLMs are entirely useless. Used carefully, judiciously, and honestly, they can function as labor-saving devices and thought amplifiers. That's a lot of caveats, I know. The enormous question is whether the value outweighs the costs. My hunch is it will, eventually, even if LLMs don't measure up to the hype. A market correction likely lies ahead.

All that preamble aside, here's what I understand so far: LLMs aren't thinking, nor will they conjure an AGI genie from GPU exhaust. They're powerful Chinese rooms that manipulate symbols probabilistically, recontextualizing and reconstituting patterns of language that roughly map to human thought.

And I think that, through the process of training over mind-bogglingly enormous volumes of exemplar language, these models have managed to extract & encode a great many subtle rules of symbolic manipulation which can perform rather convincing and practically useful quasi-thinking functions.
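If you want a feel for what "probabilistic symbol manipulation" means at the smallest possible scale, here's a toy bigram sketch. It's purely illustrative and entirely my own invention for this post; a real LLM is a transformer trained on vastly more data, but the spirit of sampling the next token from observed patterns is the same:

```python
import random
from collections import defaultdict

# Toy illustration of probabilistic symbol manipulation: a bigram model that
# picks each next token by sampling from the successors seen in training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which tokens follow which.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Emit tokens by repeatedly sampling a likely successor."""
    token, output = start, [start]
    for _ in range(length):
        candidates = following.get(token)
        if not candidates:
            break
        token = random.choice(candidates)  # duplicates in the list make this frequency-weighted
        output.append(token)
    return " ".join(output)

print(generate("the"))  # prints a short, plausible-looking (and possibly nonsensical) sequence
```

No understanding anywhere in there, just pattern statistics; scale that idea up by many orders of magnitude and you get something that starts to look like quasi-thinking.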

In other words, though they may be "spicy autocomplete", the spice is potent. There's real utility in a system that can accept intentions expressed in ways that demand less of a user to meet the computer on its own turf. That's a real advance in user interface—and I'm especially interested in how this stuff looks as we move beyond chatbots.

When a coding assistant produces code that closely matches what I would have written anyway—and faster than consulting docs and Stack Overflow—that's valuable. Yes, "hallucinations" happen, but with sufficient context, narrowed outputs, and result testing, they're manageable.

That's the key to good LLM use, IMO: results must be tested and verified, ideally via automation but otherwise through human inspection.
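Here's a rough sketch of what I mean. The `ask_llm` function is a hypothetical stand-in for whatever model or tool you actually use; the point is that its output is treated as a draft that only gets accepted once it passes automated checks:

```python
# "Trust, but verify": LLM output is a draft until it passes tests.

def ask_llm(prompt: str) -> str:
    # Hypothetical stub; in practice this would call your model of choice.
    return "def slugify(title): return title.lower().strip().replace(' ', '-')"

def accept_if_verified(source: str) -> bool:
    """Run the generated code against known-good cases; reject on any failure."""
    namespace: dict = {}
    try:
        exec(source, namespace)  # load the candidate function
        slugify = namespace["slugify"]
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Trim Me  ") == "trim-me"
    except Exception:
        return False
    return True

draft = ask_llm("Write a slugify(title) function in Python.")
print("accepted" if accept_if_verified(draft) else "rejected; regenerate or write it yourself")
```

The checks are the point, not the generation. If you can't articulate how you'd verify the output, you probably shouldn't be delegating it.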

Even agentic AI isn't magic. It turns out some of what we do—especially in building software—can be patently rote. With guiding prompts and rich context, an agentic system equipped with an LLM can effectively emulate an iterative development process. It doesn't "know" what it's doing; it's just that countless humans have done this before and written about it, so it can Mad-Libs its way through with context and feedback.
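Stripped of the branding, the basic loop looks something like the sketch below. The `ask_llm` call, the `generated.py` file, and the assumption that the project has a pytest suite are all placeholders of mine; real agent tooling layers planning, tool use, and guardrails on top of this cycle:

```python
import subprocess

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model or agent tool of choice here")

def agent_loop(task: str, max_rounds: int = 5) -> bool:
    """Generate code, run the tests, and feed failures back into the next prompt."""
    feedback = ""
    for _ in range(max_rounds):
        code = ask_llm(f"Task: {task}\nPrevious test output:\n{feedback}")
        with open("generated.py", "w") as handle:
            handle.write(code)
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True                               # tests pass: stop iterating
        feedback = result.stdout + result.stderr      # otherwise, loop with the failure details
    return False  # out of rounds; a human takes the tiller back
```

Notice that the loop's "judgment" is entirely outsourced to the test suite. That's why the verification habit above matters even more once an agent is driving.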

That said, I don't think pure vibe coding is wise beyond initial exploration. Keep a steady hand on the tiller and take responsibility for the output.

To sum up, I think there's a there there with LLMs and agents—fraught and compromised though it may be. In another universe, I might wait until the dust settles, maybe build furniture instead. But here and now, this stuff genuinely fascinates me. So, I'll likely keep tinkering and writing about it.
