Learning comes from shipping and failing. Here are the misses that shaped the bets that followed.
The AI Feature Nobody Used
Context: At Tempo, we built an AI-powered "smart scheduling" feature that could optimize team capacity across projects.
Why we built it: The data was there, the algorithm worked, and user research said people wanted it.
What happened: 3% adoption. Teams ignored it.
Why it failed:
- Solved a planning problem when execution was where the pain lived.
- Required too much setup (calendars, priorities, constraints).
- Felt like it was managing people, not helping them.
What I learned: AI products fail on adoption, not accuracy. The better question: what's the smallest behavior people already do manually that we can automate?
Documenting the next one
There's always another experiment in the backlog. I'm documenting the next one now.
If you're curious about the messier drafts, let's talk.
Want help shipping the next chapter?
If you're navigating an enterprise pivot, scaling a mature product, or validating an AI bet, I can help you connect the dots between strategy and execution.