I have a confession: I used to be obsessed with elegance. In my early career, I'd spend hours refactoring code to be more abstract, more generic, more theoretically pure. I was the engineer who'd introduce an entire abstraction layer for a problem that hadn't happened yet.

Then I joined a payments company. And everything changed.

The Moment That Changed How I Think

Three weeks into my job at Stripe, I watched a senior engineer spend two days removing abstraction from a critical code path. The code became longer, more repetitive, less likely to impress anyone at a conference. It also became something I could read without a flowchart.

"Why?" I asked, genuinely baffled.

"This code processes $2M per minute," she said, without looking up. "When it breaks at 2am, the person debugging it needs to understand exactly what's happening. Not what could theoretically happen."

That conversation taught me the single most important lesson of my first year.

"The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." — Edsger Dijkstra. The key word is precise, not clever.

Principle 1: Design for the Debugger, Not the Author

Every system you build will be debugged at the worst possible time — 2am, during an incident, by someone who isn't you and hasn't slept in 18 hours. That person is your actual customer. Not the engineer who writes the code. The engineer who has to fix it.

This reframing changes everything. Suddenly, observability isn't a nice-to-have — it's table stakes. A system you can't observe is a system you can't trust. Ask yourself:

  • Can I tell, from the outside, what this system is doing right now?
  • When something goes wrong, can I pinpoint where and why within 5 minutes?
  • Do my logs tell a story, or do they dump data?
  • Are my metrics actionable, or are they just aesthetically pleasing on dashboards?
💡 Practical heuristic: If your error message takes more than 10 seconds to understand, it's not a good error message. Errors should name what happened, where it happened, and ideally suggest what to do next.

Principle 2: Write for Change, Not for Permanence

There's a seductive fantasy in software engineering: that you can design a system so perfectly, it will never need to change. This fantasy is responsible for enormous amounts of wasted time and over-engineered code.

The reality is that change is the only constant. Your requirements will shift. Your scale will change. Your team will turn over. The product will pivot. The dependencies you rely on will be deprecated.

The question is not "how do I build something that won't need to change?" but "how do I build something that's easy to change?"

// Not this — tightly coupled to today's assumptions
func processPayment(amount int, currency string, cardToken string) error {
  // 400 lines of business logic
}

// This — clear seams, easy to evolve
func processPayment(ctx context.Context, req PaymentRequest) (PaymentResult, error) {
  if err := validatePayment(req); err != nil {
    return PaymentResult{}, fmt.Errorf("validation: %w", err)
  }
  // ...
}

The second version is longer. It's also immeasurably easier to test, mock, instrument, and evolve. The extra characters are not verbosity — they're clarity made explicit.
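One concrete payoff of that seam: with everything behind a request struct, validation tests become table-driven one-liners. A minimal sketch; the PaymentRequest fields and validatePayment rules below are assumptions made for illustration, not the real code:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical minimal version of the request type from the example above.
type PaymentRequest struct {
	AmountCents int
	Currency    string
	CardToken   string
}

// validatePayment applies some plausible (invented) rules to show the shape.
func validatePayment(req PaymentRequest) error {
	if req.AmountCents <= 0 {
		return errors.New("amount must be positive")
	}
	if req.Currency == "" {
		return errors.New("currency is required")
	}
	if req.CardToken == "" {
		return errors.New("card token is required")
	}
	return nil
}

func main() {
	// Each test case is one struct literal: easy to add, easy to read.
	cases := []struct {
		name string
		req  PaymentRequest
		ok   bool
	}{
		{"valid", PaymentRequest{4200, "usd", "tok_abc"}, true},
		{"zero amount", PaymentRequest{0, "usd", "tok_abc"}, false},
		{"missing currency", PaymentRequest{4200, "", "tok_abc"}, false},
	}
	for _, c := range cases {
		err := validatePayment(c.req)
		fmt.Printf("%s: ok=%v\n", c.name, err == nil)
		if (err == nil) != c.ok {
			panic(c.name)
		}
	}
}
```

Adding a new rule or a new edge case is one more line in the table, which is exactly what "easy to change" looks like in practice.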

Principle 3: Embrace Boring Technology

Dan McKinley's essay "Choose Boring Technology" is perhaps the most important piece of writing for practicing engineers in the last decade. The core insight is simple: every technology you adopt comes with a cost of understanding. Novel technologies multiply that cost.

When I was at Stripe, our core infrastructure ran on some genuinely old choices. MySQL, plain old HTTP, monorepo, boring deployment pipelines. And they worked. Reliably. At enormous scale. Because everyone understood them deeply.

This doesn't mean you should never adopt new technology. It means you should be deliberate about when the innovation cost is worth paying. Some rules of thumb:

  1. Default to boring technology that has a decade of production use.
  2. Adopt new technology only when there's a clear, specific capability gap you need to fill.
  3. When you do adopt something new, give it time and respect — don't assume you understand it.

The Hardest Lesson: Systems Are Social

Here's the thing nobody tells you about systems design: the hardest part isn't technical. The hardest part is people.

A system lives inside an organization. It has owners, users, stakeholders, and critics. Its design is a reflection of the org chart that built it (Conway's Law is real, and it will humble you). Its success depends not just on technical correctness but on whether the humans around it understand it, trust it, and can evolve it.

The best system design work I've ever done wasn't in a code editor. It was in a document, walking stakeholders through tradeoffs. It was in a whiteboard session, convincing skeptical engineers that a simpler approach was better. It was in an incident review, turning a post-mortem into a learning opportunity rather than a blame exercise.

Write the design doc. Share it early. Invite disagreement. The friction of getting buy-in is cheap compared to the cost of building the wrong thing.


I'm still learning this craft. Every system I build reveals something I got wrong in the last one. But that's the point — not to achieve perfection, but to get incrementally better, one system at a time.

The wandering, it turns out, is the whole journey.