
Building Meowsic: AI-Powered Radio for Everyone


AI · Next.js · Product

The idea for Meowsic came from a simple observation: radio is one of the oldest forms of media, yet it has barely evolved in the digital age. Streaming services give you playlists, but they lack the human (or in our case, AI) touch that makes radio special -- the commentary, the surprises, the feeling that someone is curating an experience just for you.

I started building Meowsic in early 2024 with a clear goal: let anyone spin up a radio station with an AI host that has personality, taste, and the ability to read the room. The tech stack was straightforward -- Next.js for the frontend, a combination of language models for the AI DJ, and audio processing pipelines for seamless transitions.

The hardest part was not the AI itself, but making it feel natural. Early versions would awkwardly pause between songs, or the AI would repeat itself. I spent weeks fine-tuning the timing, the pacing of commentary, and building a system that could adapt its energy to the music it was playing.
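One way to picture that energy-adaptation system is a small planner that scales the DJ's commentary length and the gap before the next track to the music's intensity. This is a minimal sketch under assumed names (`TrackInfo`, `planCommentary`, a 0-to-1 `energy` score); it is not Meowsic's actual code.

```typescript
// Hypothetical sketch: scale commentary length and lead-in gap to track energy.
// All names and the 0-1 energy score are illustrative assumptions.

interface TrackInfo {
  title: string;
  energy: number; // 0 (ambient) to 1 (high energy), e.g. from audio analysis
}

interface CommentaryPlan {
  maxWords: number; // cap on how much the DJ says between tracks
  gapMs: number;    // silence before the next track starts
}

function planCommentary(prev: TrackInfo, next: TrackInfo): CommentaryPlan {
  const energy = (prev.energy + next.energy) / 2;
  // High-energy sets get short, punchy intros; mellow sets allow longer musings.
  const maxWords = Math.round(40 - 25 * energy);
  // Tighter gaps keep momentum when the energy is high.
  const gapMs = Math.round(1500 - 1000 * energy);
  return { maxWords, gapMs };
}
```

The exact constants matter less than the principle: the pacing is a function of the music, not a fixed setting.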

One breakthrough came when I added context awareness. The AI DJ does not just announce the next song -- it connects them. It might say "That track always reminds me of late drives home" before transitioning into something with a similar mood. These small touches made a massive difference in the listening experience.
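The connecting-songs idea can be sketched as a tiny mood-matching step: if two adjacent tracks share a mood tag, the DJ gets a linking line instead of a bare announcement. The names here (`Track`, `moods`, `transitionLine`) are illustrative assumptions, not Meowsic's real data model.

```typescript
// Hypothetical sketch of context-aware transitions: link two tracks
// through a shared mood tag. Names are illustrative, not Meowsic's code.

interface Track {
  title: string;
  moods: string[]; // e.g. ["nocturnal", "mellow"]
}

function sharedMood(a: Track, b: Track): string | null {
  const moods = new Set(a.moods);
  return b.moods.find((m) => moods.has(m)) ?? null;
}

function transitionLine(prev: Track, next: Track): string {
  const mood = sharedMood(prev, next);
  if (mood) {
    // Connect the songs rather than just announcing the next one.
    return `Staying in that ${mood} lane, here's "${next.title}".`;
  }
  return `Up next: "${next.title}".`;
}
```

In practice the linking line would come from the language model, with the shared mood fed in as context; the point is that the transition is computed from both tracks, not just the next one.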

Launching Meowsic taught me something important about building products: the technical complexity should be invisible. Users do not care about your model architecture or your audio pipeline. They care about pressing play and feeling something. Every technical decision I made was in service of that moment.