Okay, so today I wanna share my experience messing around with Keith Rodden’s stuff. I mean, I stumbled upon his name while I was digging through some old data architecture blogs, and the guy’s got some interesting ideas, at least, interesting enough for me to spend a weekend on it.
First things first, I started by trying to actually find what people were talking about. Turns out “keith rodden” isn’t exactly a common search term that leads you straight to a GitHub repo. A lot of digging, and I mean a lot, finally landed me on some presentations and older articles he put out. They revolved around data modeling, specifically around separating your read and write models, the kind of split the CQRS folks talk about. Sounded fancy, so I kept going.
Next, I needed a project. Something simple. I decided to build a super basic to-do list app. I know, original, right? But hear me out. It’s perfect for demonstrating the read/write separation. The “write” side is adding, deleting, and updating tasks. The “read” side is displaying the list and filtering by status (completed/incomplete).

So, I set up two separate databases. Yeah, two. One for writing (the source of truth), and one optimized for reading. I went with PostgreSQL for both, because that’s what I’m comfortable with. But you could use anything, really. The key is that they’re logically distinct, so each schema can be shaped for its own job.
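For the curious, the setup boiled down to two connections. Here’s a minimal sketch assuming psycopg2; the database names and credentials are placeholders from my weekend setup, not anything from Rodden’s material:

```python
import psycopg2

# Two logically distinct databases: one is the source of truth,
# the other is shaped purely for the queries the UI runs.
# DSNs are placeholders for illustration.
write_db = psycopg2.connect("dbname=todo_write user=todo host=localhost")
read_db = psycopg2.connect("dbname=todo_read user=todo host=localhost")
```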
Then, I defined the write model. This was straightforward. A simple `tasks` table with columns for `id`, `description`, `completed`, and maybe a `created_at` timestamp. Normal stuff.
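In code, roughly this (a sketch of my own schema, reusing the `write_db` connection from the snippet above):

```python
# Write model: a plain, normalized source of truth.
# Run once against the write database.
with write_db.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS tasks (
            id          SERIAL PRIMARY KEY,
            description TEXT NOT NULL,
            completed   BOOLEAN NOT NULL DEFAULT FALSE,
            created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
        )
    """)
write_db.commit()
```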
The read model was where things got a little more interesting. I designed it to be flatter and more optimized for the queries I’d be running. For example, instead of just a `completed` boolean, I added a `status` column with values like “active” or “done”. Might seem small, but it means the read queries match the UI filters exactly, and I can index the one column I actually filter on.
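The read table ended up looking something like this. Again my own sketch, reusing `read_db` from the first snippet; the index on `status` is the whole point:

```python
# Read model: flattened and indexed for the list/filter queries.
# Run once against the read database.
with read_db.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS task_view (
            task_id     INTEGER PRIMARY KEY,
            description TEXT NOT NULL,
            status      TEXT NOT NULL  -- 'active' or 'done', mirrors the UI filter
        )
    """)
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_task_view_status ON task_view (status)"
    )
read_db.commit()
```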
Now, the tricky part: syncing the data. This is where Keith Rodden’s stuff gets real. I couldn’t just have a direct database replication. That defeats the purpose of having a separate read model. Instead, I needed an event-driven system. Every time something changed in the write model, I’d fire off an event to update the read model.
I used a simple message queue for this. RabbitMQ, because it’s easy to set up. Every time I added, updated, or deleted a task, my “write” application published a message to the queue. A separate “read” application subscribed to the queue and updated the read model accordingly.
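The publishing side looked roughly like this. A sketch assuming pika and a queue I named `task_events`; the event shape is whatever I made up for the weekend, not any kind of standard:

```python
import json
import pika

# The "write" application's side of the queue.
mq = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = mq.channel()
channel.queue_declare(queue="task_events", durable=True)  # survives broker restarts

def publish_event(event_type, task_id, description=None, completed=None):
    """Publish one event per change, after the write DB transaction commits."""
    event = {
        "type": event_type,  # "created", "updated", or "deleted"
        "task_id": task_id,
        "description": description,
        "completed": completed,
    }
    channel.basic_publish(
        exchange="",
        routing_key="task_events",
        body=json.dumps(event),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )

# e.g. right after inserting a row into the write model:
# publish_event("created", new_id, "buy milk", False)
```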

This is where I spent most of my time debugging. Getting the event handlers right, ensuring the data transformations were correct, dealing with potential race conditions… it was a pain. But I learned a ton about asynchronous processing.
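For what it’s worth, here’s the shape my consumer ended up in. Again a sketch with my own names; the idempotent upsert was my answer to redelivered messages, and acking only after the commit was my answer to crashes mid-update:

```python
import json
import pika
import psycopg2

# The "read" application: its own process, with its own connections.
read_db = psycopg2.connect("dbname=todo_read user=todo host=localhost")  # placeholder DSN
mq = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = mq.channel()
channel.queue_declare(queue="task_events", durable=True)

def handle_event(ch, method, properties, body):
    """Project one write-side event onto the read model."""
    event = json.loads(body)
    status = "done" if event.get("completed") else "active"
    with read_db.cursor() as cur:
        if event["type"] == "deleted":
            cur.execute("DELETE FROM task_view WHERE task_id = %s",
                        (event["task_id"],))
        else:
            # The upsert keeps the handler idempotent: if RabbitMQ
            # redelivers a message, replaying it just overwrites the row.
            cur.execute(
                """INSERT INTO task_view (task_id, description, status)
                   VALUES (%s, %s, %s)
                   ON CONFLICT (task_id)
                   DO UPDATE SET description = EXCLUDED.description,
                                 status = EXCLUDED.status""",
                (event["task_id"], event["description"], status),
            )
    read_db.commit()
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after the DB commit

channel.basic_consume(queue="task_events", on_message_callback=handle_event)
channel.start_consuming()
```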
Finally, I built the user interface. Just a simple web app with HTML, CSS, and JavaScript. Nothing fancy. The important thing was that it only interacted with the read model. All the reads came from the optimized database, and all the writes went through the write API and the message queue.
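The read endpoint was the boring part, which is kind of the point. A sketch assuming Flask (my choice; the pattern doesn’t care) hitting only the read database:

```python
import psycopg2
from flask import Flask, jsonify, request

app = Flask(__name__)
read_db = psycopg2.connect("dbname=todo_read user=todo host=localhost")  # placeholder DSN

@app.route("/tasks")
def list_tasks():
    """Serve the UI entirely from the read model; the write DB is never touched here."""
    status = request.args.get("status")  # optional filter: 'active' or 'done'
    with read_db.cursor() as cur:
        if status:
            cur.execute(
                "SELECT task_id, description, status FROM task_view WHERE status = %s",
                (status,),
            )
        else:
            cur.execute("SELECT task_id, description, status FROM task_view")
        rows = cur.fetchall()
    return jsonify([{"id": r[0], "description": r[1], "status": r[2]} for r in rows])
```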
The end result? A ridiculously over-engineered to-do list app. But it actually worked! And it demonstrated the core principles of separating read and write models. Was it worth the effort? Maybe. Probably not for a to-do list. But I can see how this pattern could be useful for more complex applications with heavy read loads and different read requirements than write requirements.
Here are some of the key takeaways:
- Separate your read and write models when your read and write needs are very different.
- Use events to keep your read model up-to-date.
- Be prepared for complexity. This is not a simple solution.
Would I do it again?
Honestly? Probably not for a simple app. The overhead is too high. But for a larger, more complex application with specific performance requirements? Absolutely. It’s a powerful pattern, and understanding it is definitely worth the effort. Now, if you’ll excuse me, I have a few thousand lines of code to refactor…