Feb 10, 2026
Software is still about thinking
Head of Design at Cursor
In “Design Careers in the Age of AI”, Hannah Hearth talks about the potential risks of compressed shipping timelines.
So I want to share this article from Ryo Lu (Cursor’s Head of Design).
He says that “coding with AI today creates a new trap: the illusion of speed without structure” 🎯
AI doesn’t replace systems thinking — it amplifies the cost of not doing it.
Ryo has always been one of my favorite systems thinkers to learn from, so this breakdown is definitely worth a bookmark 👇
software has always been about taking ambiguous human needs and crystallizing them into precise, interlocking systems. the craft is in the breakdown: which abstractions to create, where boundaries should live, how pieces communicate.
coding with ai today creates a new trap: the illusion of speed without structure. you can generate code fast, but without clear system architecture – the real boundaries, the actual invariants, the core abstractions – you end up with a pile that works until it doesn't. it's slop because there's no coherent mental model underneath.
ai doesn't replace systems thinking – it amplifies the cost of not doing it. if you don't know what you want structurally, ai fills gaps with whatever pattern it's seen most. you get generic solutions to specific problems. coupled code where you needed clean boundaries. three different ways of doing the same thing because you never specified the one way.
as Cursor handles longer tasks, the gap between "vaguely right direction" and "precisely understood system" compounds exponentially. when agents execute 100 steps instead of 10, your role becomes more important, not less.
the skill shifts from "writing every line" to "holding the system in your head and communicating its essence":
- define boundaries – what are the core abstractions? what should this component know? where does state live?
- specify invariants – what must always be true? what are the constants and defaults that make the system work?
- guide decomposition – how should this break down? what's the natural structure? what's stable vs likely to change?
- maintain coherence – as ai generates more code, you ensure it fits the mental model, follows patterns, respects boundaries.
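The four practices above can be made concrete with a small sketch. The names here (`Inventory`, `add`, `remove`, `count`) are hypothetical and not from the article — just a minimal illustration of a boundary that owns its state, an invariant enforced at that boundary, and one sanctioned way to read the data:

```python
from dataclasses import dataclass, field

@dataclass
class Inventory:
    # boundary: callers never touch _counts directly; this is where state lives
    _counts: dict[str, int] = field(default_factory=dict)

    def add(self, sku: str, qty: int) -> None:
        # invariant: quantities are always positive, counts never go negative
        if qty <= 0:
            raise ValueError("qty must be positive")
        self._counts[sku] = self._counts.get(sku, 0) + qty

    def remove(self, sku: str, qty: int) -> None:
        # the invariant is enforced at the boundary, not scattered across callers
        have = self._counts.get(sku, 0)
        if qty <= 0 or qty > have:
            raise ValueError("cannot remove more than held")
        self._counts[sku] = have - qty

    def count(self, sku: str) -> int:
        # the one way to read state — no second code path to drift out of sync
        return self._counts.get(sku, 0)


inv = Inventory()
inv.add("widget", 3)
inv.remove("widget", 1)
print(inv.count("widget"))  # → 2
```

With this structure specified, an agent can generate callers freely: any code path that would break the invariant fails loudly at the boundary instead of silently corrupting state elsewhere.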
this is what great architects and designers do: they don't write every line, but they hold the system design and guide toward coherence. agents are just very fast, very literal team members.
the danger is skipping the thinking because ai makes it feel optional. people prompt their way into codebases they don't understand. can't debug because they never designed it. can't extend because there's no structure, just accumulated features.
people who think deeply about systems can now move 100x faster. you spend time on the hard problem – understanding what you're building and why – and ai handles mechanical translation. you're not bogged down in syntax, so you stay in the architectural layer longer.
the future isn't "ai replaces programmers" or "everyone can code now." it's "people who think clearly about systems build incredibly fast, and people who don't generate slop at scale."
the skill becomes: holding complexity, breaking it down cleanly, communicating structure precisely. less syntax, more systems. less implementation, more architecture. less writing code, more designing coherence.
humans are great at seeing patterns, understanding tradeoffs, making judgment calls about how things should fit together.
ai can't save you from unclear thinking – it just makes unclear thinking run faster.