Notes & Process
Today we dig into the "schizophrenic" nature of AI and how to tackle it with attention, attention, attention; language contracts; addressing errors immediately, whenever they happen; and relating to your copilot.
So far, we've covered the first four rules of interaction with ChatGPT and other Generative AI:
Today I'm going to cover the next two rules (5 and 6) and illustrate why attention, attention, attention is essential in your interactions. A computer's output is only as good as your (and others') input, and because GenAI is a product of imperfect algorithms and its interactions with wide swaths of the human population, it inevitably produces some of the weirdest response patterns. As the human in control of the conversation, it's your responsibility to notice the weirdness and respond appropriately.
So, let's get into it.
Basically, what I mean by "schizophrenic" is that it acts one way, then acts a different way, and doesn't even recognize the difference. Normal human interaction follows a logical progression of thought; AI often doesn't.
Let's say we're talking about the color of the sky. We recognize blue as the color under discussion and keep referencing that color as the conversation continues on that topic. When AI participates in that conversation, it can start by recognizing that the sky is blue, then respond further down the line as though the sky is green. Just like that, it has demonstrated a referential integrity violation, with no mention that it changed its color reference. Humans get confused and recognize the confusion; AI doesn't even know it's supposed to be confused. Schizophrenia.
That happens all of the time in my conversations with AI. I find it happens more frequently when the topic has either gone on long or involves a lot of branching. It appears to me that backreferencing causes violations in either of these cases:
Notice a similarity between the two cases? In both, you need to address the error immediately so as to resurface the appropriate context. The longer you wait, the deeper the problem goes and the greater the effect the error has on any future responses and results.
Keywords in this context are ultimately subjective, though there are some basic keywords your copilot will use and expect you to use in its interactions with you.
You can also define others as you go. AI will assign a context to whatever you want to make a new keyword; just tell it what you want the keyword to represent in terms of behavior.
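One way to make that habit concrete is a minimal sketch of a "keyword contract": a preamble you paste at the start of a conversation that tells your copilot what each keyword means. The keywords and behaviors below are hypothetical examples of my own, not built-in copilot commands.

```python
# Hypothetical keyword definitions: each maps a keyword to the behavior
# you want the copilot to apply whenever it sees that keyword.
KEYWORD_DEFS = {
    "RECAP": "Summarize the decisions made so far before answering.",
    "BRANCH": "Treat what follows as a side question; keep the main context intact.",
}

def keyword_preamble(defs):
    """Render keyword definitions as a preamble you can paste into a chat."""
    lines = ["Keyword contract for this conversation:"]
    for keyword, behavior in defs.items():
        lines.append(f"- {keyword}: {behavior}")
    return "\n".join(lines)

print(keyword_preamble(KEYWORD_DEFS))
```

Pasting something like this up front gives the conversation a shared vocabulary from the first message, instead of hoping the copilot infers your conventions along the way.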
Some examples:
- `<ComponentName>` (with or without backticks)
- `functionName()` (with or without backticks)
- `file/location/and-with-an.extension`

In time you'll develop a keyword list that suits you. Just keep in mind which words and phrases are most useful to you, because remember: you're in control. It's your copilot, your conversation, your productivity tool, your conversational agent. You're the trainer; it's the trainee.
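If you assemble prompts in code before pasting them into a chat, the conventions above can be captured as tiny helpers. This is a hypothetical sketch; the helper names are illustrative, not part of any copilot API.

```python
def ref_component(name):
    """Angle-bracket a component name, e.g. <NavBar>."""
    return f"<{name}>"

def ref_function(name):
    """Mark a function reference with trailing parens, e.g. loadUser()."""
    return f"{name}()"

def ref_file(path):
    """Backtick a file path so it reads as a literal location."""
    return f"`{path}`"

# src/NavBar.tsx, NavBar, and loadUser are made-up names for illustration.
prompt = (
    f"In {ref_component('NavBar')}, the call to {ref_function('loadUser')} "
    f"in {ref_file('src/NavBar.tsx')} returns stale data."
)
print(prompt)
# → In <NavBar>, the call to loadUser() in `src/NavBar.tsx` returns stale data.
```

The point isn't the code itself; it's that each reference type gets one unambiguous shape, so your copilot never has to guess whether "NavBar" is a component, a function, or a file.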
Keywords are important because they help you get the job done in a way of speaking that's optimal for you and your copilot simultaneously. The more you use keywords, the more you'll see they're akin to keyboard shortcuts. Don't ignore them; embrace them.
Let me know how you use keywords in your productivity flow. I'm always curious to learn how others have improved theirs so I can improve my own.
That's it for Part 3 of this series. Now we know that:
The more you interact with Generative AI, the more comfortable you'll become with the process. It can be daunting at first to meet a new being, learn its language, and figure out how to present yourself to it. Never forget, though, that GenAI is merely a reflection of the human species. In some ways it's quite easy; in others it's quite frustrating. But either way, it's very possible to get work done.
Remember, you're always in control.
See you in the next article!