May 28 2025

Recently, something happened that made me kind of, like, break a little bit. I’m still not entirely sure why this whole thing bothers me so much, but here’s a blog post about it. I happened to ask ChatGPT something for fun. I often do this just to see what it says, and whether I feel like it’s right or wrong. I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:

ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.

The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.

I’m not linking to the post because I don’t want to pile on this person any further; it’s not really about them. It’s about the state of the discourse around AI.

I am not claiming that ChatGPT is an amazing search engine. What is breaking my brain a little bit is that all of the discussion online around AI is so incredibly polarized. This isn’t a “the middle is always right” sort of thing either, to be clear. It’s more that both the pro-AI and anti-AI sides are loudly proclaiming things that are pretty trivially verifiable as not true. On the anti side, you have things like the above. And on the pro side, you have people talking about how no human is going to be programming in 2026. Both of these things fail a very basic test of “@gork is this true?” (sorry, this is a bad joke; if you’re not terminally online on Bluesky, substitute “chat, is this true?” (sorry, this is a bad joke; if you’re over the age of 40, substitute “is this true?” (sorry, humor is one way of coping with stress, and writing this post is stressing me out))).

Oh, and of course, ethical considerations of technology are important too. I’d like to have better discussions about that as well. For me, capabilities precede the ethical dimension, because the capabilities inform what is and isn’t ethical. (EDIT: I am a little unhappy with my phrasing here. A shocker given that I threw this paragraph together in haste! What I’m trying to get at is that I think these two things are inescapably intertwined: you cannot determine the ethics of something until you know what it is. I do not mean that capabilities are somehow more important.) But I also know reasonable people disagree with me on that. Let’s talk about it! I mean that for all dimensions of this topic, not solely the capabilities.

Now, it’s not like I expect everyone to come to some consensus on a technology. For example, a lot of people like Rust and hate Go, and a lot of people like Go and hate Rust. That’s fine. That doesn’t bother me. But there’s something different about this that is driving me up a wall.

To be clear, I am not particularly pro or anti AI. Here’s what I currently think:

Anyway, just to be clear, (how many times can you say ‘just to be clear’ in one post, Steve, come on (at what level of nested parenthetical do I implement some sort of interlocutor into my blog like Xe)) I don’t think that if you think LLMs suck at software development, you’re wrong. What I want to be able to do is talk about it in a reasonable way, and figure out why we have different opinions. That is it. But for some reason, being able to do that feels impossible.

I’m going to end up blogging more about AI/LLMs in the near future, to get some more of this stuff off of my chest and to try to be the change I want to see in the world. So expect some of that, probably. But until then, if you happen to have any links to reasonable discussion on these topics, I’d love to see them. Here are two posts I recently read and really enjoyed thinking about:

Thanks. If you want to get mad about this post, please just ignore me.


Here’s my post about this post on Bluesky: