Shared Physics
A blog by Roman Kudryashov

Are You Treating ChatGPT Better Than Your Coworkers?

6 min read

Here’s a 🌶️ spicy take: I think practice at human/LLM interactions can lead to better human/human interactions.


Over the last few months I've watched colleagues spend considerable time crafting and sharing LLM prompts for different projects—complete with context, examples, and success criteria. Then I'd see them turn around and message their team: "Hey, can you update the deck with a snappy new slide? Need it back in an hour for a client meeting."

I get it. Prompt engineering is cool and new and exciting and there's a lot of experimentation and expertise to share. A lot of that sharing aligns with my own experiences: good prompts provide necessary context, detailed requirements, step-by-step breakdowns, operational personas, clear success criteria, and specific deliverables. Good prompts err on the side of lengthiness even for small deliverables.

And on the flipside, folks have quickly understood that lazy prompting leads to lazy deliverables. You get hallucinated slop if you put in low-detail prompts.

But…

"I need this thing by EOD and I'm stuck in meetings all day. Can you just figure it out?"

… all of those things are also good for people.

I mean, compare human/LLM interactions to human/human interactions. My deep research (read: having worked in an office, having talked with other people) suggests that people are generally awful at providing requirements to other people. They don't have to be, but they choose to be. Projects and requests routinely get tossed over a wall with the expectation that the recipient will just figure things out. Yes, people are flexible and have the agency to figure things out, but it leads to plenty of time wasted on figuring out the necessary context, vision, and goals. The resulting bad deliverables then require considerable rework, re-rework, and re-re-rework. I’ve seen this in Product/Engineering/Design standoffs, Executive/Manager confusion, and the routine back and forth between Sales/Marketing.

A friend described it perfectly: "It feels like requesters can't be bothered to figure out what they want and put in the work to make that clear, and then they're unhappy with every output."

Why do we communicate better with our AI overlords?

Five things come to mind:

  • Speed changes everything
    Human/LLM interactions are fast. You get output back in seconds. Human/Human interactions are slow, especially in the day to day of business'ing. It takes time for our wetware to communicate, process, fit work into our schedules, do the work, and then deliver the work. This ranges from hours to days and weeks.

  • Costs restructure behaviors
    Human/LLM interactions are relatively cheap, even when you're paying money for them. $20/month or even $200/month is insanely cheap compared to an hour of a professional's time, which starts at just under $20/hour for intern-level work. You might not have to 'pay' your colleague to do work for you, but that cost is built into their role, the work they're assigned, and their capacity to take on and prioritize new projects.

  • Tight feedback loops lead to better habits
    The speed and cost of interactions greatly impact cycle speed (doing, receiving, reflecting, trying again) and subsequently accelerate skills development and learning. Specifically, the skill of "communicating requirements to someone".

    Human/LLM interactions are controlled experiments. Same interface, predictable responses, clear cause-and-effect between input quality and output quality. You quickly learn what works even if you're not intentional about the learning.

    Human/human interactions are slow and contain plenty of external variables (mood, energy, differing personalities, baggage) that make it harder to draw repeatable and generalizable lessons about what effective and ineffective interactions look like.

  • There are no safe assumptions
    Human/human interactions bundle a metric ton of assumptions into each interaction. Yes, your co-worker probably does know something about the company you're both working for and the products you sell or the client you're talking about. But those assumptions are safe only at a superficial level.

    Humans are not mind readers; every interaction is lossy between what's in your head, what you've said, what they've heard, and how they interpreted that. That's why interviewing and shadowing are hugely effective information-seeking techniques on projects even between longtime collaborators. Your colleagues can make reasonable assumptions about what you're looking for, but those relationships take time (weeks, months) to develop. And yes, humans can apply additional reasoning and information-seeking behaviors to tackle a problem independently, but (a) not always and (b) not all humans show this pattern of behavior.

    With LLMs, it is not safe to make any assumption. So a good prompt builds in all the information the LLM needs to do its work. And when you get slop back from bad inputs, the quality of the output is your fault, not the LLM's. How should the LLM have known that some detail was critical for your sales deck, or that you had a very specific color scheme in mind when you described it as "should look good"?

  • Power dynamics inform relationships
    You can't pull rank on an LLM, which means that you – yes, you – are always the accountable and responsible party in an interaction. With people, by contrast, power relationships underpin most collaboration dynamics. Very few collaborators are equals in any meaningful sense. Executives can't just tell an LLM to "figure out the rest" and expect magic. There's no "that's their job to understand me" with an LLM. There's no "well, you're the tech guy, it's your role to ask questions". What you get out of an LLM is broadly equivalent in quality to what you put in.

    You also can't make up expectations for what the LLM can or can't do. LLMs don't "learn" and "upskill", so telling them to be better at things they're not good at is a fool's errand. You need to understand their limitations and work within them. There's no external blame to assign for bad outputs, given that we're all using the same LLMs.

We've normalized giving LLMs better direction than we give our coworkers

Human/human and human/LLM interactions are both about effectively communicating information through extremely lossy mediums (text and sound). However, human/LLM interactions make it extremely clear when bad results are consequences of your own inputs and unrealistic expectations, not of the other party. Combine that with their speed and cost, and most people who turn to LLMs quickly grok a new pattern of effective communication (prompt engineering): detailed, contextual, specific, and iterative.

Here's an example of something I've recently seen provided to an LLM:

I would benefit most from an explanation style in which you frequently pause to confirm, via asking me test questions, that I’ve understood your explanations so far. Particularly helpful are test questions related to simple, explicit examples. When you pause and ask me a test question, do not continue the explanation until I have answered the questions to your satisfaction. I.e. do not keep generating the explanation, actually wait for me to respond first. Thanks!

Here's another:

Use precise terminology; avoid generic phrasing. Favor concise language with a high insight-to-word ratio. Write for a C-suite audience—efficient, nuanced, and analytically clear. Avoid lists unless they serve a clear analytical function. Use boldface to emphasize domain-specific or technical terminology. Minimize assumptions. [...] You are an AI expert like Noam Shazeer, a writer in the plain, high precision style of Paul Graham, and a teacher with the conceptual clarity of Richard Feynman.

Here's the version that would have been passed as a human/human interaction:

Can you explain this topic to me? I don't get it, and Justin's explanation was really long winded. Do better.

And as I see more and more people sharing examples of how they prompt LLMs on certain projects, I can't help but think: gee whiz, why couldn't you provide me that level of competence and completeness in your asks and requests?

I've read people describing working with LLMs as having a "super-powered copilot" but needing to treat them like "a junior assistant/intern". Folks invest time and effort to provide clear, well-structured, contextually complete prompts for LLMs to work from. Meanwhile, Slack messages and emails remain cryptic haikus of half-baked requests.

Maybe it's worth throwing down a gauntlet on this: make human collaboration and requirements communication more like prompt engineering. Take the time to figure out what you want and describe it well. You'd do it for an AI. Do it for a human as well.

Next time you're about to fire off a vague Slack message or throw a half-baked idea brief over the wall, ask yourself: would I get slop back if I sent this to ChatGPT? If the answer is yes, take a minute to spell out what you actually want. You'll probably get better outputs from your team. And heck, your coworkers might wonder why you're suddenly so clear and helpful.

🫳
🎤

Thanks for reading

Useful? Interesting? Have something to add? Shoot me a note at roman@sharedphysics.com. I love getting email and chatting with readers.

You can also sign up for irregular emails and RSS updates when I post something new.


Who am I?

I'm Roman Kudryashov -- technologist, problem solver, and writer. I help people build products, services, teams, and companies. My longer background is here and I keep track of some of my side projects here.


Stay true,
Roman