Shared Physics
A blog by Roman Kudryashov

Identifying Signals of Expertise

14 min read

One of the most useful questions I've used for evaluating expertise during a hiring interview is:

Tell me about a time when you did something you thought was right, but later it turned out to be a mistake.

That kicks off a series of additional questions and followups:

  • What was the context?
  • Why did you think you were right and how did you advocate for it?
  • How and when did you learn you were wrong?
  • How did you address it?
  • What lessons did you learn from it?
  • What needed to be true for you to have been right?

This drip of questions typically takes 10-15 minutes of interviewing time. It's a variation on the "tell me about a time you changed your mind about something" question, but provides very different signals. Importantly, it is not a "tell me about a mistake you've made" question, which looks similar on paper but misses the point. And that point is: when did a candidate do something that at the time seemed right enough to them and to others, and only in hindsight was revealed to be the wrong approach in some critical way?

Here's what it does, why it works, and the signals it unpacks:

1. It interrupts interviewing autopilot

It's an uncommon framing that breaks common interviewing patterns, forcing people to think in real time — something AI and memorized answers struggle to fake. I sometimes open the interview with this question to set the tone for an authentic conversation and get someone out of performance mode.

2. It checks for introspection and situational awareness

Introspection and situational awareness are critical pieces of expertise: thinking about yourself, understanding your behaviors, putting them in context, and changing based on what you learn. This question pokes at that mechanism; everyone's been wrong before, but not everyone reflects on it or adapts their behavior in response.

Furthermore, a breadth of lived experience suggests some accumulation of mistakes, errors, and general wrongness along the way. All things considered, it's extremely unlikely you're interviewing someone who is perfect (possible but not probable). This is a good thing! Mistakes and errors are an inevitable part of growth, and these questions aim to uncover examples (and awareness) of such growth. Probing into what a person did after they learned they were wrong helps you understand their ability to react to new information that might be different from what they already had in mind — a good signal of their decision-making and information-seeking behaviors.

3. It identifies level-appropriate thinking

The magnitude of the mistake should match the seniority of the role. A senior architect who's never made anything worse than a syntax error either hasn't been given senior-level responsibilities, lacks good feedback systems, or lacks the self-awareness to recognize their strategic missteps. Conversely, a junior developer who talks about betting the company on the wrong database is either inflating their actual influence or has been working in an extremely immature company. The sweet spot is when candidates describe mistakes that match their claimed level of responsibility — senior folks should have examples involving architecture, strategy, or team direction. Their examples should show they understand the weight of irreversible decisions and have lived with the long-term consequences of their choices. If they haven't, they're probably not as senior as they claim.

4. It evaluates potential vs. actual bounds of expertise

The framing of the question seeks out an interesting situation: the candidate was allowed to take on some work, but ended up being wrong in some way about it. This situation describes the upper bound of a candidate's expertise at some problem set at a point in time.

That they were allowed (or assigned) a certain level of work means they had organizational trust to take it on (most people assign work to match a person's level). That they did not meet that goal in some critical way means the work had elements beyond the candidate's capabilities at the time. That's their local upper bound at that moment: the gap between their perceived expertise and their actual expertise. The size of that gap shows how close (or far) they are to closing it.

Of course, not every example sends this signal. A strong followup to probe whether it's a true signal is: "Was this kind of decision/project representative of the work you were doing at the time?"

5. It also describes the candidate's operating environment

When interviewing engineers, the error they describe making is a good baseline for the level of autonomy and trust they had in making decisions. It's the difference between "my error was a bug" vs. "I prototyped a production system on Google Apps Script and was stuck maintaining the prototype for a year" vs. "I chose the wrong language for the project."

How such errors were caught and corrected says a lot about the support systems around them — whether they learned early through mentorship or only after consequences. It's the difference between "… and my manager caught it in review and told me why this was wrong and explained the right way of doing it" vs. "… and we shipped that code and six months later we had to rewrite the whole damn thing because it became unmaintainable under pressure." That typically gives me a sense of whether the candidate is a generalist who can do a lot on their own or someone who really thrives on a team with specialization and well-defined roles/responsibilities.

6. It explores comfort with learning

At Amazon, there's a leadership principle that goes:

Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs.

In building a high-learning team, we flipped this on its head: team members are allowed to be wrong, a lot. But they're wrong in constantly new ways. They test boundaries, push their capabilities, and experiment. The only "sin" in a learning environment is repeating the same mistakes over and over.

Given that growth comes from working at the edges of your expertise, my corollary belief is that you learn more from mistakes than from successes. Our team's operating philosophy was to lean into mistakes as learning opportunities and good signals to check our understanding and assumptions around a problem. So having someone comfortable with airing out embarrassing details and thinking critically about them was a good cultural signal.

7. It pokes at confidence and ambition

It's critical to understand whether candidates will speak up and be willing to be wrong in public ways — their confidence in their expertise and standing among peers. How candidates advocated for their (wrong) ideas shows their willingness to speak up and defend what they thought was right — critical for healthy technical discussions. It's a reliable signal for both personal confidence (speaking up) and technical confidence (the details of their solution). A person's willingness to put themselves out there is generally a good sign of their tolerance for risk and potential for growth.

However, you don't want to bias your interviewing toward overconfidence. Double-click into why someone thought they were right and why they were willing to defend that position. The useful followups here are: Why did you believe what you believed? What path of analysis or thinking got you to that place, and made you willing to defend that position?

As a corollary to unpacking a candidate's operating environment, a good followup is: "Was it normal for suggestions on how to solve a problem to come from the team or was there something unique in this situation?"

8. It shows how someone generalizes information

I care about what they take away from the error. Some candidates learn great lessons; others have takeaways I would have facepalmed myself over, had I not been on camera. Do they view mistakes as valuable learning opportunities or as failures to be minimized? Do they generalize a lesson or walk away with a narrow, situation-specific takeaway?

Prompting for "bigger mistakes" usually correlates with more interesting lessons. A memorable exchange with one candidate was about how a technical solution that worked for them at one company failed when they applied it to another company — all sorts of healthy discussion came out of that!

Following up with "... and what might have needed to be true for you to be right?" probes someone's openness to change and where they place responsibility. It's the difference between "other people should have been different" and "I should have known to check for X details first."

9. It reveals individual contribution (not team achievements)

One of the most critical signals this question uncovers is the difference between individual expertise and organizational expertise. Many candidates unconsciously slip into "we" language when describing their work: "We decided to implement microservices," or "We realized the approach wasn't working."

This matters because you're hiring an individual, not their previous team. When you hear "we," always follow up with clarifying questions: "To clarify, you personally made that decision?" or "What was your specific role in realizing the approach wasn't working?"

The best candidates can clearly articulate their personal contributions while still acknowledging team dynamics. They'll say things like "I advocated for the approach, and convinced the team because..." or "The team was split, but I pushed for X because Y." This precision reveals both their actual expertise and their self-awareness about their role in group decisions.

Watch out for candidates who can't differentiate their contributions from their team's. When pressed for specifics, they either deflect ("It was really a team effort") or claim credit unconvincingly ("Yes, I did all of that"). Both responses suggest either a lack of individual impact or a lack of honesty — neither of which you want. And if you're not confident in their answer, continue to push into the details; an intricate understanding of the details — and the ability to navigate them — is the lifeblood of expertise.


Additional Considerations for an Expertise-Oriented Interviewing Toolkit

There are many ways to probe for expertise, but all of them share a few common gotchas to avoid:

1. Avoid hypothetical questions and generic answers

With this sort of framework, we're conducting a behavioral interview. Instead of asking hypotheticals ("What would you do if..."), you ask about specific past experiences ("Tell me about a time when...").

The premise is simple: past behavior predicts future behavior. Similarly, hypothetical questions produce hypothetical answers — often idealized or aspirational versions of what someone would like to do rather than what they actually do in practice. So always ask for specific, lived experiences.

Consider this exchange:

Question:
How would you work with a difficult client who is demanding unreasonable changes?

The Hypothetical Answer:
I would try to understand why they're asking for those requirements, and work with them to see how we can solve their problem with existing capabilities. If we can't do that, I'd work with sales to price out a new statement of work, then partner with product and engineering to make sure we build the right features to spec.

Is the candidate answering a question or reciting an HBR article on how to provide generic assessments? The answer glosses over the real-world complexity they would actually have to navigate, such as:

  • The client doesn't want to pay for a new SOW. You're in a whale-oriented enterprise market and they have the financial weight to push your team around. Executive leadership needs to get involved in this call; it's not actually in your hands.
  • The product and engineering team is underwater trying to deliver on five other features. They're telling you this is going into the backlog... but it might never get done because it's not a reusable feature for any other client. The PM told you flat out: "this is a bad feature, convince them out of it." How do you manage a difficult team member from a different department, who isn't wrong in what they're saying?
  • Your boss is driving you to talk about ten other things you're doing, but the client doesn't want to hear about that. Your boss is also telling you to make the client happy, and the client doesn't want to hear about how it's a bad idea. The client has told you that three other vendors solve this problem in this way and that it's on their security checklist. What do you do when you're caught in the middle with no good options?

When I hear a hypothetical answer with no request for clarification or further details, what I really hear is a disregard of how things actually work — someone who has a model of the world in their head and is going to make other things conform to that model, rather than be flexible in figuring out how to adapt to the situations at hand. I've seen that sort of candidate brought in before and it resulted in a pattern of buying time, abdicating responsibility, and lots of private venting about how they're misunderstood or everyone is wrong.

2. Flip hypothetical questions, press on generic answers

So avoid hypothetical questions. Instead, flip them to emphasize specific examples:

Tell me about a time when you had to deal with a difficult client or colleague. Why was it difficult? What led up to that difficulty? What were the organizational dynamics? How was it resolved?

Those questions — asked as a drip of followups — give you a much truer signal of what someone has actually done in a situation like that, and subsequently what they're likely to do again.

Similarly, apply the same probing-for-details approach on any generic (non-specific) answer that is provided.

3. Work backwards to identify unique types of expertise

I've used these techniques for differentiated hiring when building high-performance engineering teams. It's been a critical piece of my interviewing toolkit, especially when making initial hires to a new team.

But sometimes you need to design different questions because you're looking to evaluate a specific type of expertise. My experience has been that you have to clearly define and articulate the qualities you're looking for and what the application of those qualities looks like in your operating context. Then you can work backwards to identify scenarios or contexts that may have elicited those qualities, their opposites, or their absence. This approach can help you identify the right questions to ask for specific types of expertise.

4. Edge cases and red flags

This question pokes at many critical pieces of expertise but doesn't catch every edge case that comes up in conversation:

  • A candidate that struggles to answer
    If someone is struggling to answer the question, I'll use a personal anecdote as an example. This reciprocity often unblocks them and makes sharing embarrassing stories feel safer.
  • Early career candidates
    Early-career candidates may not have good examples to talk through. In these cases, pivot the conversation to examples from other contexts: academic projects, internships, even personal projects.
  • Selection bias against the careful and thoughtful
    Some people rarely make big mistakes because they're extremely careful. If you're getting that signal, double-click into it: why is someone so cautious? Probe whether this reflects true thoughtfulness or risk aversion. Ask about potential drawbacks of this thoughtfulness (for example, is it traded off against speed?), and ask about times they operated at the edge of their comfort zone.
  • Cultures of stigmatization
    Some candidates come from cultures that stigmatize mistakes and may be especially reluctant to show weakness. Recognize that this behavioral change is tough and consider whether your team can support their transition to a learning culture.
  • Safety-critical roles
    This approach doesn't work for roles where risk-taking is dangerous (healthcare, aviation). Refer to the "work backwards" piece above to figure out the right model of expertise and behavior you want to evaluate for, and craft a new and more appropriate behavioral question, e.g., one around risk mitigation or process adherence.
  • Lies and fabrications
    Reality has fractal-like detail — you can always zoom in further. Keep probing specifics. Liars hit walls quickly; truth-tellers reveal increasing complexity. Double-clicking into the specifics helps filter out most fabulists and unqualified candidates.
  • No good examples
    If someone genuinely can't think of a significant mistake, that itself is revealing. Either they operate far within their comfort zone, lack self-awareness, or work in environments with no autonomy. All are important signals.

The Theoretical Foundations

These interviewing questions draw from established models of how expertise develops. They work because they map directly onto how experts seek out information, act on it, evaluate their outcomes, and change their behaviors in response:

Learning Loops (OODA, PDCA)

OODA (observe, orient, decide, act) loops and PDCA (plan, do, check, act) cycles describe how experts refine their judgment through repeated cycles of action and reflection. The question walks candidates through exactly such a cycle, revealing how sophisticated their learning, information seeking, and adaptation processes are.
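To make that mapping concrete, here's a minimal, illustrative Python sketch. The stage names follow plan-do-check-act; the mapping of my followup questions onto those stages is my own interpretation, not part of the formal models:

    # Illustrative only: tracing the interview followups through one
    # PDCA-style learning loop. The question-to-stage mapping is an
    # interpretation, not part of the formal PDCA/OODA models.
    FOLLOWUPS_BY_STAGE = {
        "plan":  ["What was the context?"],
        "do":    ["Why did you think you were right and how did you advocate for it?"],
        "check": ["How and when did you learn you were wrong?"],
        "act":   ["How did you address it?",
                  "What lessons did you learn from it?",
                  "What needed to be true for you to have been right?"],
    }

    def walk_learning_loop():
        """Yield (stage, question) pairs in plan-do-check-act order."""
        for stage in ("plan", "do", "check", "act"):
            for question in FOLLOWUPS_BY_STAGE[stage]:
                yield stage, question

    for stage, question in walk_learning_loop():
        print(f"[{stage}] {question}")

A candidate with a well-developed learning process will have a concrete answer at every stage of that loop.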

Situationist Model of Expertise

Situationism suggests expertise is largely situational — behavior depends on context, environment, and support structures. Strong candidates demonstrate how they actively sought information to understand their environment before acting, while weaker ones reveal rigid thinking that ignores context and deflects accountability.

Recognition-Primed Decision (RPD) Model of Expertise

The RPD view of expertise is that experts navigate complexity through pattern matching against their library of experiences. The richness of a candidate's example — and their ability to connect it to broader patterns — reveals the depth of their experience library, their ability to extract useful patterns from it, and their information seeking/pattern matching behaviors.


Final Takeaways

Will It Work For You?

Before writing this post, I shared an abridged version in a forum of colleagues. Within days, folks began to share stories about what they were able to identify in candidate conversations that wasn't obvious before. Here's an example:

"I used it this week to good effect. The candidate positioned themselves reactively. Their decisions were good, but they didn’t really come across as the protagonist. It indicated a need for a bit more structure, but we’re interviewing for a newer role where we need them to help define what the job actually is."

This technique helped identify not just technical competence but also how candidates operate within organizations — whether they drive initiatives or primarily respond to direction, whether they learn from mistakes or repeat them, and whether they can adapt to your specific context.

Closing Thoughts

To implement this in your own hiring process:

  1. Make the time, set the tone
    Introduce the question early in the interview to set an authentic tone. Make sure you can spend time digging into the details and pulling on the various threads that come up.
  2. Listen actively, follow up
    Listen for signals beyond the technical details of the mistake. Don't make assumptions about the candidate; ask followup questions.
  3. Double down on specificity
    Seek out individual experience. Avoid hypotheticals or generic answers. Probe for the context around the answers.
  4. Compare patterns
    Look at responses across candidates for the same role.

Remember, the goal isn't to judge candidates for making mistakes — it's to understand how they process, learn from, and adapt after those mistakes. That capacity for growth and self-correction is often a stronger predictor of success than any perfect track record.

But here's what I've found most surprising after years of using this question: it doesn't just reveal how candidates think — it changes how they engage. By leading with vulnerability (asking about mistakes), you create permission for honesty. The best interviews I've conducted with this question didn't feel like interviews at all. They felt like two people debugging a problem together.

And isn't that exactly what you're trying to predict?


Appendix: An Example Technical Exchange In Practice

Here's how this might play out in an interview:

Interviewer: Tell me about a time when you made a mistake, but at the time you thought you were right.

Candidate: At my previous company, I pushed hard for adopting a microservices architecture. I was convinced it was the right approach based on the scaling challenges we were facing.

Interviewer: What were those challenges?

Candidate: We had a monolithic application that was becoming unwieldy. Load times were increasing, and developer productivity was declining because changes in one area affected others unpredictably.

Interviewer: Why did you think microservices were the right solution? How did you advocate for it? Were there other approaches considered?

Candidate: We didn't really talk about other approaches. There was a lot of conversation about microservices as a scaling solution at that time. I had read several case studies from tech giants who solved similar problems this way. We had one experimental microservice already live, and it was one of the most reliable parts of our system so people instinctively bought into the vision. I was the first one to call it out publicly and my manager rallied other teams to buy in pretty quickly because my previous suggestions were pretty good.

Interviewer: How and when did you learn you were wrong?

Candidate: About six months into implementation, we realized we had underestimated the operational complexity. Our team wasn't prepared for the challenges of distributed systems debugging, and our deployment pipeline wasn't mature enough. We were moving slower than before, not faster.

Interviewer: "We?"

Candidate: Yeah, my team and I, and a few other teams. Turns out running one microservice is different than running an entire fleet of them! A more senior member pointed out that we lacked SRE/DevOps expertise. We had feature engineers, but no one dedicated full time to managing platforms. I spent time learning about platform management but I wasn't an expert. So teams ended up implementing inconsistently, and our new problem became orchestration, on top of all the old problems.

Interviewer: How did you address it? What happened next?

Candidate: We had invested too much in the migration by the time we realized it might not be the right path. Executive leadership forced us to pause further decomposition in favor of new feature development. In the meantime, I convinced my boss to let me run a few sprints focusing on improving our operational tooling and monitoring. Our architecture remained in a semi-modularized phase and as a team we burned some trust on that project.

Interviewer: What lessons did you learn from it?

Candidate: The biggest lesson was that architectural patterns aren't one-size-fits-all. What works for Google or Netflix didn't work for a team of our size and maturity. I realized we needed to evaluate technology decisions not just on technical merits but on organizational readiness. Nobody blamed me for it, but I felt personal responsibility for having pushed for it. On the bright side, I feel much more comfortable with my platform management skills. I used that knowledge a lot in my next role.

This exchange reveals the candidate's technical judgment, how they influence others, their ability to recognize and correct course, and how they extract broader principles from specific experiences.

Thanks for reading

Useful? Interesting? Have something to add? Shoot me a note at roman@sharedphysics.com. I love getting email and chatting with readers.

You can also sign up for irregular emails and RSS updates when I post something new.


Who am I?

I'm Roman Kudryashov, a technologist, problem solver, and writer. I help people build products, services, teams, and companies. My longer background is here and I keep track of some of my side projects here.


Stay true,
Roman