Vibe Coding Is A New Way to Build
“Vibe coding” sounds like a throwaway phrase, and you could be forgiven for not taking it seriously. But for many would-be developers, it offers genuine creative freedom. With vibe coding, the ability to write code is no longer a barrier to building software; LLMs now handle most of the complex construction details.
What’s happening here is a democratization of access. You don’t need coding expertise to create; you need the impulse to do so. If you don’t know JavaScript or another programming language, an LLM, or a tool like Cursor, can handle it. There’s no mystique here, just initiative and a willingness to experiment.
My Foray Into Vibe Coding
My own experience speaks to this. When I had to collaborate with a client on GitHub a few months back, I quickly became acquainted with Visual Studio Code’s integrated terminal. Shortly afterward, I began using Docusaurus to build a professional website, and it was surprisingly intuitive to work out what was needed. With time, I became more adventurous and more willing to work through coding obstacles. Lucille Ball was right: “the more things you do, the more you can do.”
Not being able to write React code from scratch was hardly a problem. With LLMs guiding me, I could follow the structure and purpose of each file, even if I couldn’t explain every line. My foray into vibe coding might have ended there if I hadn’t stumbled across an article about assembling tools into a full-stack AI-agentic app. It was intriguing enough that I decided to try it myself, with vibe coding as my go-to.
Going the Extra Mile
Creating a website through GitHub is one thing. Building an IT stack that functions as a full application is something else entirely. Think of it as wiring together a series of virtual components using code.
I suddenly realized the possibilities for connection were endless. Since my ability to modify proprietary software was limited, I started by exploring open-source projects to jumpstart the process. I set up an Amazon S3 account to store future clickstream data in the cloud. Tools I had only heard about, like FastAPI, Render, and ngrok, suddenly had a place in my workflow. A whole new world opened up. Without an LLM, wiring all these components together would’ve taken months.
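To make that wiring a little more concrete, here’s a minimal sketch of the kind of glue code an LLM can walk you through: a FastAPI endpoint that accepts a clickstream event and drops it into S3 with boto3. The bucket name, route, and event fields below are placeholders, not my actual configuration.

```python
# Hypothetical sketch (not my exact setup): a FastAPI endpoint that accepts
# a clickstream event and writes it to S3 as a JSON object via boto3.
import json
import uuid
from datetime import datetime, timezone

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
s3 = boto3.client("s3")
BUCKET = "my-clickstream-bucket"  # placeholder bucket name


class ClickEvent(BaseModel):
    user_id: str
    page: str
    element: str


@app.post("/events")
def ingest_event(event: ClickEvent):
    # Key each event by date and a UUID so objects never collide.
    key = f"clicks/{datetime.now(timezone.utc):%Y/%m/%d}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(event.model_dump()))
    return {"stored": key}
```

Run something like this locally, expose it through ngrok or deploy it on Render, point a page’s click tracker at the /events route, and the data starts accumulating in S3 as individual JSON objects.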
But it wasn’t the coding that posed the biggest challenge; Render logs and ChatGPT took care of that. The harder part was thinking through UI decisions, and even there I’m weighing whether to rely on Cursor for feedback.
The excitement is just beginning: LLMs that pair with the Model Context Protocol (MCP) promise to reshape software. Adding MCP capabilities to software isn’t just an upgrade; it requires a complete rethinking of what software can do. That realization pushed me to explore how I could use these technologies in my own work. I began lying awake at night, thinking through the possibilities of AI-agentic systems and just how far they might go.
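To give a flavor of what “adding MCP capabilities” can look like, here’s a purely illustrative sketch using the official Python SDK’s FastMCP helper; the server name and tool are invented for the example.

```python
# Illustrative only: a toy MCP server built with the Python SDK's FastMCP
# helper. The tool here is trivial; a real one might query S3 or call a
# FastAPI backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("clickstream-demo")  # hypothetical server name


@mcp.tool()
def count_clicks(page: str) -> int:
    """Return a (fake) click count for a page, so an LLM can call it."""
    return 42  # stand-in for a real lookup


if __name__ == "__main__":
    mcp.run()
```

Even a toy like this hints at the shift: instead of a fixed interface, the app exposes capabilities an LLM can discover and call on its own.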