
Working with ChatGPT: Golden Rules for Generative AI Copilot Interaction [Part 3]

Today we dig into the schizophrenic nature of AI and how to tackle it with attention, attention, attention; language contracts; addressing errors the moment they happen; and relating to your copilot.

November 2025 · engineering

So far, we've covered the first four rules of interaction with ChatGPT and other Generative AI:

  1. The programmer is always correct
  2. Always be specific as f**k
  3. Always consider AI's responses to be binary-driven
  4. Know when enough is enough

Today I'm going to cover the next two rules (5 and 6) and illustrate why attention, attention, attention is essential in your interactions. A computer's output is only as good as your (and others') input, and because GenAI is the product of imperfect algorithms and its interactions with wide swaths of the human population, it inevitably comes up with the weirdest response patterns sometimes. As the human in control of the conversation, it's your responsibility to notice the weirdness and respond appropriately.

So, let's get into it.

Rule 5: AI is schizophrenic

Basically, what I mean by "schizophrenic" is that it acts one way, then acts a different way, and doesn't even recognize the difference. Normal human interaction follows a logical progression of thought. AI often doesn't.

Let's say we're talking about the color of the sky. We recognize the color blue as the topic of discussion and continue to reference that color as the conversation continues on that topic. When AI participates in that conversation, it can start by recognizing that the sky is blue, then follow up with a response down the line as though the sky's color is green, and just like that, it's demonstrated a referential integrity violation with no mention that it's changed its color reference. Humans get confused and recognize the confusion; AI doesn't even know it's supposed to be confused. Schizophrenia.

These violations happen all the time in my conversations with AI, and I find they happen more frequently when the conversation has either gone on long or involves a lot of branching. It appears to me that the backreferencing breaks down in one of two ways:

  • It doesn't go back far enough. In this case, there isn't enough consideration of the conversation's history. If we've been talking with AI about a class specification and want some of the initial definition to be carried forward as we develop it, we may be surprised by shallow backreferencing. A shallow backreference ignores the earlier phases of the development and gives too much weight to the later phases, so AI appears to forget some preferred parts of the spec. It may forget the name of a variable or function you provided in the earlier phases, then automagically propose a new name for it in later stages of the conversation as though you never specified one.
  • It goes back too far. The other type of error is one where it appears to ignore changes you've made recently. I see this type of violation more often from ChatGPT, and my working theory is that GenAI assigns too much weight to some contexts on its stack. As I work through the stack -- addressing prompt after prompt, including those I've added through my later responses to earlier ones -- the early contexts become stale, but GenAI doesn't make enough effort to clear them or to associate the late contexts with the early responses. What's more, GenAI makes little if any effort to indicate where I am on the stack: am I back in a very early part, a relatively early area, nearing the end, or nearing some area where a major decision was made (yet wasn't recorded as explicitly canonical)? As a result, the old context eventually reappears, and since the conversation is always focused primarily on the most recent response, the older context takes precedence, is assigned greater relevance and importance, and wipes out any relevant context that came after it.

Notice a similarity between the two cases? In both, you need to address the error immediately so as to resurface the appropriate context. The longer you wait, the deeper the problem goes and the greater the effect the error has on any future responses / results.
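
To make "immediately" concrete, here's what that habit looks like in code. This is a minimal sketch against the OpenAI Python SDK; the model name, the example facts, and the `corrective_prompt` helper are my own illustrative choices, not anything the API prescribes. In the chat UI, the equivalent move is restating your canon verbatim in the very next message.

```python
# Minimal sketch: re-pin the canonical context the moment you notice drift.
# Assumes the OpenAI Python SDK; model name and helper are illustrative.
from openai import OpenAI

client = OpenAI()

# Facts the conversation must not lose, however long it runs or branches.
canonical_facts = [
    "The class is named `InvoiceLedger`.",
    "The totaling function is `sum_outstanding()`, not `calcTotal()`.",
]

def corrective_prompt(complaint: str) -> str:
    """Prepend the canonical facts to a correction so stale context is
    displaced in this turn, not several turns later when it's dug in."""
    pinned = "\n".join(f"- {fact}" for fact in canonical_facts)
    return f"Canonical context (do not rename or revise):\n{pinned}\n\n{complaint}"

history = [{"role": "system", "content": "You are a coding copilot."}]

def send(user_text: str) -> str:
    """Send one turn and keep the running history."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The instant the copilot proposes `calcTotal()` out of nowhere, correct it
# in the very next turn -- with the canon attached.
print(send(corrective_prompt("You just renamed the totaling function. Use the canonical name.")))
```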

Rule 6: Identify the keywords

Keywords in this context are ultimately subjective, though there are some basic keywords your copilot will use and expect you to use in your interactions with it.

You can also define your own as you go. AI will assign a context to whatever you designate as a new keyword; just tell it what you want the keyword to represent in terms of behavior (see the sketch after the examples below).

Some examples:

  • Continue / Go ahead / Proceed / Next step
  • A, B, C, 1, 2, 3, etc. / All of the above
  • Canonical
  • Save / Remember / Note / Reference
  • `Single backticks for single lines of code or monospaced text`
  • ```Triple backticks for blocks of code or monospaced text```
  • <ComponentName>
    (with or without backticks)
  • functionName()
    (with or without backticks)
  • .css-class-name
  • file/location/and-with-an.extension
  • Generally, markdown syntax for responses
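
If you want to see what a language contract looks like spelled out, here's a minimal sketch against the OpenAI Python SDK. The specific keyword definitions, the model name, and the wording are illustrative examples of the kind of contract I mean; nothing here is built into the API.

```python
# Minimal sketch: declare a "language contract" up front so the copilot
# knows what your keywords mean. Assumes the OpenAI Python SDK; the
# specific keywords and the model name are illustrative, not built in.
from openai import OpenAI

client = OpenAI()

language_contract = """Keyword contract for this conversation:
- "Canonical": the statement that follows is permanent; never revise it.
- "Proceed": continue with the next step of the current plan.
- "Save": summarize the decision just made so we can reference it later.
Respond in markdown. Single backticks mark identifiers; triple backticks mark code blocks."""

messages = [
    {"role": "system", "content": language_contract},
    {"role": "user", "content": "Canonical: the component is named <InvoiceTable>."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```

In the chat UI, the equivalent is simply pasting the contract as your first message.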

In time you'll develop a keyword list that suits you. Just keep in mind which words and phrases are most useful to you, because, remember, you're in control: it's your copilot, your conversation, your productivity tool, your conversational agent. You're the trainer; it's the trainee.

Keywords are important because they help you get the job done in a way of speaking that's optimal for you and your copilot simultaneously. The more you use keywords, the more you'll find they're akin to keyboard shortcuts. Don't ignore them; embrace them.

Let me know how you use keywords in your productivity flow. I'm always curious to learn how others have improved their flows so I can improve my own.

Conclusion

That's it for Part 3 of this series. Now we know that:

  • We're in control
  • Specificity is key
  • AI is binary and schizophrenic
  • Time and energy are precious commodities
  • Language contracts (keywords) are optimal

The more you interact with Generative AI, the more comfortable you'll become with the process. It can be daunting at first to meet a new being, learn its language, and figure out how to present yourself to it. Never forget, though, that GenAI is merely a reflection of the human species. In some ways it's quite easy, and in others it's quite frustrating. But either way, it's very possible to get real work done.

Remember, you're always in control.

See you in the next article!
