Model-based reasoning
Why is all the good Vietnamese food in the north-west of Adelaide? Is that segregation at work?
I like living close to my friends. I also make friends more easily with people who have similar life experiences to me. I don’t want everyone to be just like me, but I’d like maybe 1 in 3 people to have a similar value system to me (e.g. political views).
Thomas Schelling’s 1971 model shows something weirder: even when people only want 30% of their neighbours to be similar, you end up with 70-90% segregated neighbourhoods. Individual preference, collective outcome.
This is model-based reasoning. Instead of arguing about what “causes” segregation, you build a simple model and see what happens.
What’s interesting is that I have a concrete sense of what a reasonable “similar neighbours” preference looks like, but before running the model I had no idea what kind of segregation it would produce.
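The whole dynamic fits in a page of code. Here’s a minimal sketch of it (the grid size, vacancy rate, 30% threshold, and move-to-a-random-vacant-cell rule are my choices for illustration, not Schelling’s exact setup):

```python
import random

SIZE = 20          # grid side length (illustrative choice)
EMPTY_FRAC = 0.1   # fraction of vacant cells
THRESHOLD = 0.3    # agents want >= 30% of neighbours to match their type

def make_grid(rng):
    """Random grid of 'A', 'B', or None (vacant), roughly 50/50 split."""
    n_agents = int(SIZE * SIZE * (1 - EMPTY_FRAC))
    cells = (["A", "B"] * (SIZE * SIZE))[:n_agents]
    cells += [None] * (SIZE * SIZE - n_agents)
    rng.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbours(grid, r, c):
    """Types of the up-to-8 occupied cells around (r, c)."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < SIZE and 0 <= cc < SIZE:
                if grid[rr][cc] is not None:
                    out.append(grid[rr][cc])
    return out

def unhappy(grid, r, c):
    nbrs = neighbours(grid, r, c)
    if not nbrs:
        return False
    return sum(n == grid[r][c] for n in nbrs) / len(nbrs) < THRESHOLD

def step(grid, rng):
    """Move each unhappy agent to a random vacant cell; return moves made."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    moved = 0
    for r, c in movers:
        vacant = [(rr, cc) for rr in range(SIZE) for cc in range(SIZE)
                  if grid[rr][cc] is None]
        rr, cc = rng.choice(vacant)
        grid[rr][cc], grid[r][c] = grid[r][c], None
        moved += 1
    return moved

def similarity(grid):
    """Average fraction of same-type neighbours across all agents."""
    fracs = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None:
                nbrs = neighbours(grid, r, c)
                if nbrs:
                    fracs.append(sum(n == grid[r][c] for n in nbrs) / len(nbrs))
    return sum(fracs) / len(fracs)

rng = random.Random(42)
grid = make_grid(rng)
s_start = similarity(grid)
for _ in range(50):
    if step(grid, rng) == 0:
        break
s_end = similarity(grid)
print(f"same-type neighbours: {s_start:.0%} -> {s_end:.0%}")
```

Run it and the average same-type-neighbour fraction climbs well past the 30% anyone asked for. That’s the point: the segregation level isn’t in the code anywhere, it falls out of the dynamics.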
Why this matters now
LLMs make building models trivial. Before, you’d need to:
- Learn a programming language
- Figure out visualization libraries
- Debug for hours
- Maybe give up
Now you can describe what you want and get a working simulation in minutes.
When to use models
Models are useful when:
- Intuition fails at scale. What feels right for 10 people breaks at 1000.
- Feedback loops exist. A causes B causes C causes A. Your brain can’t track this.
- You’re arguing about mechanisms. “Does X cause Y?” is better answered with “here’s a world where X exists, does Y happen?”
Models are useless when:
- You just need data. Don’t simulate customer behavior, talk to customers.
- The model is more complex than reality. If it takes longer to build than to test, just test.
- You’re using it to confirm what you already believe. Models are for exploring, not proving.