October 1st, 2025

Down the LLM rabbit hole

Rants

Vibe coding isn't new. A user describes their needs to an LLM, the LLM generates the desired output, and the user then checks for inconsistencies and errors and provides feedback. It's basically "asking your subordinates" to do the work for you; the difference is that your subordinate is now an LLM that knows a lot more than any one human would.

Not gonna lie, I've found vibe coding very useful since I tried it. I ported my entire BearlyJekyll theme (which itself was modified thanks to an LLM) to Hugo in 2-3 days thanks to the powerful models GitHub Copilot provided. Then when I decided to switch to Arch, I thought, "Let's ask an LLM to help me write a post-install script!" How did it go? Well, here is the final result.

I first solicited help from GPT 4.1 and GPT 5, but I found that they produced useless code all too often: some parts of their code were useful, while other parts had bugs and syntax errors. Claude Sonnet and Grok also failed to impress. Then I tried the very costly Google Gemini 2.5 Pro. It was amazing. It isn't the fastest, though it is faster than GPT 5. It produced code with the fewest errors, code that actually works most of the time. And the best part: it cares about coding style, simplicity, and idiot-proofing.

My Copilot trial will come to an end very soon, and I will be left with Kagi's standard model offerings. If I want to use Gemini 2.5 Pro, I need to subscribe to the priciest plan Kagi offers ($25/mo) or subscribe to GitHub Copilot. I think I will stay with the standard models Kagi gives me for now. Then, when I earn enough income and need a more powerful LLM assistant, I will upgrade to Kagi Ultimate.
