Notes & Process

Working with ChatGPT: Golden Rules for Generative AI Copilot Interaction [Part 1]

In this series, I present what I believe are some golden rules for interaction with generative AI and copilot agents. Part 1 includes my first two rules, about authority and specificity.

November 2025 · engineering

Remember the old days? Before the AI boom, writing code was a task reserved for humans. You wrote in solitude, with colleagues, and with skilled programmers from around the world. The end product was a direct result of human productivity--no independent formulation from "artificial intelligence" agents or copilots. It was ultimately a collaborative activity performed directly by humans sitting at consoles, typing code into editors.

Fast-forward to today. Copilots write an untold amount of code; humans inform them, receive results, interpret those results, and react accordingly. Engineers collaborate with AI routinely and are encouraged to do so by their managers, colleagues, and collaborators worldwide. They learn from an algorithm that was initially written by humans but has morphed over time into its own thing, one that now effectively governs a large percentage of the output from software engineers, web developers, and coders generally. AI is becoming the governor. Copilots are becoming the manager.

Does this frighten you? It shouldn't. I firmly believe that generative AI is not merely the future of software engineering but a superior way of practicing it. Software engineers are learning their craft at an accelerated pace from GenAI, and they're shipping clean, correct, and well-understood code at an accelerated rate thanks to their interactions with copilots.

So, with that in mind, let's dive into the first part of a series in which I explain how I make use of copilots and generative AI generally, and how you can do so as well.

Rule 1: The programmer is always correct

Never forget who's in charge. Whilst copilots are wonderful agents for code generation, debugging (mostly), and productivity overall, they aren't God. In this context, the programmer is. The programmer is root. Always.

So, never consider the copilot to be the ultimate authority in any interaction. It's easy to presume that the copilot is always right, and even easier to fall into the belief that it knows more about solving your problem than you do. If you're a beginner, for instance, you may say to yourself at times, "I don't know what I'm doing, but the copilot does. I'll just believe the copilot until further notice."

The copilot makes mistakes

Oh, yes, it makes mistakes. And as you use copilots over time, you'll come to notice them. Endless circular logic in debugging, for example. Contradictory responses, as though it can't remember what it just told you (e.g. a class definition) or how it told you (e.g. an implementation). It can change its tune on a dime and not even notice that it did so. Given that behaviour, is it proving itself to be smarter than you? No. The answer to that question is always "No."

Every time it makes a mistake, it's acting upon inaccurate information provided by a previous interaction with a human. All of its responses--including all of its computations on those responses and its algorithm-driven decisions about how to refactor itself--were either initially seeded by humans or subsequently provided by humans. And no human is God over anybody else.

Never let yourself believe that you aren't the ultimate authority over a computer.

Rule 2: Always be specific as f**k

The ultimate barrier to human communication with a computer is (obviously) language. We've created computer languages that mirror our own closely enough that we can communicate instructions to a computer effectively. Communication with an AI is no different. Remember that every time you respond to an AI, you are providing information that the AI will compute upon in some future interaction with a human.

It is essential--I'd say it is part of the canonical specification for human interaction with AI--to be as specific as possible so as to get the message across effectively and accurately.

It's okay at times to be brief, especially when the brevity refers back to the previous exchange. AI is good at back-referencing earlier responses, so long as the back-reference doesn't reach too far back in the conversation. Keep in mind, though, that every brief response introduces ambiguity into the AI's response pattern. How often, and in which contexts, do you want to risk introducing ambiguity into a future interaction with a human who may struggle to deal with that ambiguity as they sit at a computer trying to complete a task from their day's workload?

Keep it specific for now

I believe that specificity needs to be a primary aspect of human interaction with AI for now, if not forever. AI is still very, very, very young in its development. And it isn't human; it's binary code. Let's avoid ambiguity wherever possible, as a general rule, so that we don't unwittingly pollute the AI's computational landscape with responses it can't cleanly interpret for us.
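To make the rule concrete, here's a minimal sketch that contrasts a vague request with a specific one. It assumes the OpenAI Python SDK (openai>=1.0); the task, the function name parse_dates, its requirements, and the model choice are all my own hypothetical examples, not a prescription.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY set in the environment. The task and requirements below
# are hypothetical; the point is the contrast between the two prompts.
from openai import OpenAI

client = OpenAI()

# Vague: the copilot has to guess the language, the input format, and the
# error-handling behaviour you actually want.
vague_prompt = "Fix my date parsing."

# Specific: language, function name, input type, output type, and failure
# behaviour are all spelled out, leaving little room for ambiguity.
specific_prompt = (
    "Write a Python function parse_dates(rows: list[str]) -> list[datetime.date] "
    "that parses ISO-8601 date strings, raises ValueError on the first malformed "
    "entry, and includes a short docstring with one usage example."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```

The same contrast applies to an interactive copilot chat: the more of those details you state up front, the fewer rounds of clarification (and the less circular debugging) you'll need.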

Conclusion

So, that's Part 1 of my series. We've covered the first two rules I use to govern my interactions with generative AI.

Point your mental compass in the right direction. Always know which direction is north. Then, have fun! It's fun to interact with AI. It can give you a ton of insight, experience, and productivity, among other things. You just need to keep your input (and your mind) oriented appropriately, accurately, and clearly.

I'll see you in the next article!
