This morning I was chatting with Claude about LLM first principles, some of the issues involved, potential solutions, etc. It was a very interesting chat. And you know what? I didn’t get any “hot takes”, uninformed opinions masquerading as facts, plain-old hate, etc. It really was quite enjoyable.

There has been an increase in stories about people’s interactions with AI systems, sometimes veering into extremely unhealthy territory, and I understand why. I enjoyed the chat with Claude about LLMs, but it also highlighted some of the issues that push some people over the edge. Most people just don’t have the computer science background, or an enthusiast’s grasp of computing, to understand that they’re just talking to a statistical model, one that also isn’t trained to say “I don’t know” or to step back when things start to get weird.

But I think a couple of things are important to think about here:

  1. LLMs aren’t going anywhere. People need to stop using these types of cases to say “See! See! These things are dangerous and we need to stop!” Drama isn’t going to solve anything.
  2. Researchers can use these cases to help train models to recognize when things are getting weird, or to say “I don’t know” instead of making stuff up just to satisfy a query.
  3. Companies can do a much better job explaining how this stuff works in plain language so that normal people can grasp that it’s just a model they’re working with. It’s a very sophisticated model, but just a model.

There is still a lot of work to be done, and as with all of these things, it’s better to strip away the opinions at the edges. The “doom and gloom” crowd doesn’t have the answers. The “boosters” also don’t have the answers. The answers, as always, are going to be found somewhere in the middle.