$ cat ~/posts/claude-wont-be-there.mdx

Claude Won't Be There

The next phase of the AI bubble is showing: we’re moving from “this can do anything” to “my brother in Christ do not let this thing touch production”. To an extent, the backlash is warranted: you can build some brilliant security vulnerabilities if you blindly use an LLM to generate code.
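To make that concrete, here’s a minimal sketch of the kind of bug that slips through. The Express route, table name, and connection details are all invented for illustration; the point is the shape of the mistake, not any one model’s output:

```ts
import express from "express";
import { Pool } from "pg";

const app = express();
const db = new Pool(); // hypothetical setup; connection details omitted

// The kind of handler an LLM will cheerfully generate: user input is
// interpolated straight into the SQL string. Classic SQL injection.
app.get("/users-unsafe", async (req, res) => {
  const result = await db.query(
    `SELECT * FROM users WHERE name = '${req.query.name}'`
  );
  res.json(result.rows);
});

// The version you ship after actually reading the code: a parameterized
// query, so the input never becomes part of the SQL text.
app.get("/users", async (req, res) => {
  const result = await db.query("SELECT * FROM users WHERE name = $1", [
    req.query.name,
  ]);
  res.json(result.rows);
});

app.listen(3000);
```

Both handlers “work” in a demo. Only one survives `?name=' OR '1'='1`, which turns the unsafe query into `WHERE name = '' OR '1'='1'` and dumps the whole table. That’s the gap between code that runs and code you can be responsible for.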

I always say that as a software engineer, you’re paid to think; code is an output, but it’s not the only one. You’re here to solve a problem for the business. You’re responsible for that, which means you’re responsible for what gets deployed.

You’re not going to push your own untested code to production. You’re not going to grab a random jQuery snippet from a 2007 StackOverflow post and ship it unchecked. Why should code generated by an LLM be any different? You’re responsible.

LLMs might hallucinate security, but when your entire user database leaks, you’re the one on the incident call, and Claude won’t be there.