Cascading Constraints
What we learned building a planning system.
Since this is the first piece of its kind I’m posting, a quick introduction seems in order.
I’ve jumped around, geographically and professionally. From Serbia to Austria to China, Thailand, and eventually Indonesia (on a weak passport, no less). From political science to teaching English to technical support to, finally, becoming a travel expert.
That last shift is what led me to build the system I’m describing here and, unintentionally, to develop the thinking behind it.
I’m not an AI researcher or an engineer. I am, however, deeply curious and comfortable using the tools I need to answer questions that matter to me. This system started as something I “vibe-coded” to save time. Curiosity got the better of me, and that turned into research, a provisional patent, and a lot of thinking, probably influenced by my time studying political science.
This piece is my attempt to think through why planning systems struggle when feasibility comes last, and what changes when constraints come first.
Generation is the default until “wrong” is deterministic.
What I’ve learned so far is that most AI product conversations start in the same place: generation. Understandably so; these are generative, predictive tools. We generate candidate outputs, then validate, fix if necessary, or regenerate until the result looks right. That workflow makes sense for many creative tasks. It makes sense when “wrong” is subjective.
But what happens when “wrong” is deterministic?
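To make the contrast concrete, here is a toy sketch of the generate-validate-regenerate loop, with a deterministic feasibility check standing in for “wrong.” Every name and the budget rule are hypothetical illustrations, not part of the system described in this piece:

```python
import random

def generate(rng):
    # Stand-in for a model call: propose a candidate "plan"
    # as a list of item costs (purely illustrative).
    return [rng.randint(50, 400) for _ in range(3)]

def is_valid(plan, budget=600):
    # Deterministic check: the plan either fits the budget or it doesn't.
    # No judgment call involved, unlike subjective "looks right" validation.
    return sum(plan) <= budget

def generate_until_valid(seed=0, max_attempts=100):
    # The generate-then-validate workflow: keep sampling candidates
    # and discard any that fail the feasibility check.
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        plan = generate(rng)
        if is_valid(plan):
            return plan, attempt
    raise RuntimeError("no feasible plan found")

plan, attempts = generate_until_valid()
print(f"feasible plan found after {attempts} attempt(s), total cost {sum(plan)}")
```

The point of the sketch: when the validator is deterministic, every failed attempt is pure waste, which is exactly why putting constraints first starts to look attractive.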