Shared Physics
A blog by Roman Kudryashov

Appendix to "Identifying Signals of Expertise"

8 min read

In "Identifying Signals of Expertise", I ended up cutting almost 2000 words to keep things succinct and focused. But that stuff was useful! If you're looking to upskill your interviewing skills further and dive deeper into identifying signals of expertise, read on; I've included three sections that were previously cut:

  • Additional Considerations for an Expertise-Oriented Interviewing Toolkit
  • Theoretical Foundations of Expertise
  • An Example Technical Exchange In Practice

Original Article:

Identifying Signals of Expertise

Appendix 1: Additional Considerations for an Expertise-Oriented Interviewing Toolkit

There are many ways to probe for expertise, but they all share a few common gotchas to avoid:

1. Avoid hypothetical questions and generic answers

With this sort of framework, we're conducting a behavioral interview. Instead of asking hypotheticals ('What would you do if...'), you ask about specific past experiences ('Tell me about a time when...').

The premise is simple: past behavior predicts future behavior. Similarly, hypothetical questions produce hypothetical answers — often idealized or aspirational versions of what someone would like to do rather than what they actually do in practice. So always ask for specific, lived experiences.

Consider this exchange:

Question:
How would you work with a difficult client who is demanding unreasonable changes?

The Hypothetical Answer:
I would try to understand why they're asking for those requirements, and work with them to see how we can solve their problem with existing capabilities. If we can't do that, I'd work with sales to price out a new statement of work, then partner with product and engineering to make sure we build the right features to spec.

Is the candidate answering the question, or reciting an HBR article on how to provide generic assessments? The answer glosses over the real-world complexity they would actually have to navigate, such as:

  • The client doesn't want to pay for a new SOW. You're in a whale-oriented enterprise market and they have the financial weight to push your team around. Executive leadership needs to get involved in this call; it's not actually in your hands.
  • The product and engineering team is underwater trying to deliver on five other features. They're telling you this is going into the backlog... but it might never get done because it's not a reusable feature for any other client. The PM told you flat out: "this is a bad feature, convince them out of it." How do you manage a difficult team member from a different department, who isn't wrong in what they're saying?
  • Your boss is pushing you to talk about ten other things you're doing, but the client doesn't want to hear about any of that. Your boss is also telling you to make the client happy, and the client doesn't want to hear about how their request is a bad idea. They've told you that three other vendors solve this problem this way and that it's on their security checklist. What do you do when you're caught in the middle with no good options?

When I hear a hypothetical answer with no request for clarification or further details, what I really hear is a disregard for how things actually work: someone who has a model of the world in their head and will make everything else conform to that model, rather than flexibly adapting to the situation at hand. I've seen that sort of candidate brought in before, and it resulted in a pattern of buying time, abdicating responsibility, and lots of private venting about being misunderstood or how everyone else is wrong.

2. Flip hypothetical questions, press on generic answers

So avoid hypothetical questions. Instead, flip them to emphasize specific examples:

Tell me about a time when you had to deal with a difficult client or colleague. Why was it difficult? What led up to that difficulty? What were the organizational dynamics? How was it resolved?

Those questions — asked as a drip of followups — give you a much truer signal of what someone has actually done in a situation like that, and subsequently what they're likely to do again.

Apply the same probing-for-details approach to any generic (non-specific) answer you get.

3. Work backwards to identify unique types of expertise

I've used these techniques for differentiated hiring when building high-performance engineering teams. It's been a critical piece of my interviewing toolkit, especially when making initial hires to a new team.

But sometimes you need to design different questions because you're looking to evaluate a specific type of expertise. My experience has been that you have to be able to clearly define and articulate the qualities that you're looking for and what the application of those qualities looks like in your operating context. Then, you can work backwards to identify scenarios or contexts that may have elicited either the quality you're looking for, their opposite, or their absence. This approach can help you identify the right questions to ask for specific types of expertise.

4. Edge cases and red flags

This question pokes at many critical pieces of expertise but doesn't catch every edge case that comes up in conversation:

  • A candidate that struggles to answer
    If someone is struggling to answer the question, I'll use a personal anecdote as an example. This reciprocity often unblocks them and makes sharing embarrassing stories feel safer.
  • Early career candidates
    Early-career candidates may not have good examples to talk through. In these cases, pivot the conversation to examples from other contexts: academic projects, internships, even personal projects.
  • Selection bias against the careful and thoughtful
    Some people rarely make big mistakes because they're extremely careful. If you're getting that signal, double-click into it: why is someone so cautious? Probe whether this reflects true thoughtfulness or risk aversion. Ask about potential drawbacks of that thoughtfulness (for example, is it traded off against speed?), and ask about times they operated at the edge of their comfort zone.
  • Cultures of stigmatization
    Some candidates come from cultures that stigmatize mistakes and may be especially reluctant to show weakness. Recognize that this behavioral change is tough, and consider whether your team can support their transition to a learning culture.
  • Safety-critical roles
    This approach doesn't work for roles where risk-taking is dangerous (healthcare, aviation). Refer to the "work backwards" piece above to figure out the right model of expertise and behavior you want to evaluate for, and craft a new, more appropriate behavioral question, perhaps around risk mitigation or process adherence.
  • Lies and fabrications
    Reality has fractal-like detail: you can always zoom in further. Keep probing specifics. Liars hit walls quickly; truth-tellers reveal increasing complexity. Double-clicking into the specifics helps filter out most fabulists and unqualified candidates.
  • No good examples
    If someone genuinely can't think of a significant mistake, that itself is revealing. Either they operate far within their comfort zone, lack self-awareness, or work in environments with no autonomy. All are important signals.

Appendix 2: Theoretical Foundations of Expertise

These interviewing questions draw from established models of how expertise develops. They work because they map directly onto how experts seek out information, act on it, evaluate their outcomes, and change their behaviors in response:

Learning Loops (OODA, PDCA)

OODA (observe, orient, decide, act) loops and PDCA (plan, do, check, act) cycles describe how experts refine their judgment through repeated cycles of action and reflection. The question walks candidates through exactly such a cycle, revealing how sophisticated their learning, information seeking, and adaptation processes are.

Situationist Model of Expertise

Situationism suggests expertise is largely situational — behavior depends on context, environment, and support structures. Strong candidates demonstrate how they actively sought information to understand their environment before acting, while weaker ones reveal rigid thinking that ignores context and deflects accountability.

Recognition-Primed Decision Making (RPD) Model of Expertise

The RPD view of expertise is that experts navigate complexity through pattern matching against their library of experiences. The richness of a candidate's example — and their ability to connect it to broader patterns — reveals the depth of their experience library, their ability to extract useful patterns from it, and their information seeking/pattern matching behaviors.


Appendix 3: An Example Technical Exchange In Practice

Here's how this might play out in an interview:

Interviewer: Tell me about a time when you made a mistake, but at the time you thought you were right.

Candidate: At my previous company, I pushed hard for adopting a microservices architecture. I was convinced it was the right approach based on the scaling challenges we were facing.

Interviewer: What were those challenges?

Candidate: We had a monolithic application that was becoming unwieldy. Load times were increasing, and developer productivity was declining because changes in one area affected others unpredictably.

Interviewer: Why did you think microservices were the right solution? How did you advocate for it? Were there other approaches considered?

Candidate: We didn't really talk about other approaches. There was a lot of conversation about microservices as a scaling solution at that time. I had read several case studies from tech giants who solved similar problems this way. We had one experimental microservice already live, and it was one of the most reliable parts of our system so people instinctively bought into the vision. I was the first one to call it out publicly and my manager rallied other teams to buy in pretty quickly because my previous suggestions were pretty good.

Interviewer: How and when did you learn you were wrong?

Candidate: About six months into implementation, we realized we had underestimated the operational complexity. Our team wasn't prepared for the challenges of distributed systems debugging, and our deployment pipeline wasn't mature enough. We were moving slower than before, not faster.

Interviewer: "We?"

Candidate: Yeah, my team and I, and a few other teams. Turns out running one microservice is different from running an entire fleet of them! A more senior team member pointed out that we lacked SRE/DevOps expertise. We had feature engineers, but no one dedicated full time to managing platforms. I spent time learning about platform management, but I wasn't an expert. So teams ended up implementing inconsistently, and our new problem became orchestration, on top of all the old problems.

Interviewer: How did you address it? What happened next?

Candidate: We had invested too much in the migration by the time we realized it might not be the right path. Executive leadership forced us to pause further decomposition in favor of new feature development. In the meantime, I convinced my boss to let me run a few sprints focusing on improving our operational tooling and monitoring. Our architecture remained in a semi-modularized phase and as a team we burned some trust on that project.

Interviewer: What lessons did you learn from it?

Candidate: The biggest lesson was that architectural patterns aren't one-size-fits-all. What works for Google or Netflix didn't work for a team of our size and maturity. I realized we needed to evaluate technology decisions not just on technical merits but on organizational readiness. Nobody blamed me for it, but I felt personal responsibility for having pushed for it. On the bright side, I feel much more comfortable with my platform management skills, and I used that knowledge a lot in my next role.

This exchange reveals the candidate's technical judgment, how they influence others, their ability to recognize and correct course, and how they extract broader principles from specific experiences.

Thanks for reading

Useful? Interesting? Have something to add? Shoot me a note at roman@sharedphysics.com. I love getting email and chatting with readers.

You can also sign up for irregular emails and RSS updates when I post something new.


Who am I?

I'm Roman Kudryashov -- technologist, problem solver, and writer. I help people build products, services, teams, and companies. My longer background is here and I keep track of some of my side projects here.


Stay true,
Roman