Conversation Modes as Intent Framing
I use Generative AI tools regularly for a variety of tasks, to the point where I’ve established a simple taxonomy of conversation modes: 1) Teaching; 2) Research/Validation; 3) Writing/Creative; and 4) Review/Audit. Each mode represents a distinct conversational goal.
In Teaching Mode, the goal is to explain, to build some scaffolding, to clarify a concept, or to identify relevant examples.
In Research/Validation Mode, the goal is to check claims, separate evidence from inference, and surface uncertainty.
In Writing/Creative Mode, the goal is to develop ideas, framing, and outlines; work with voice; and explore creative possibilities.
In Review/Audit Mode, the goal is to diagnose strengths, gaps, risks, and alternatives.
The purpose of defining these modes is to more clearly guide the AI by sharing my conversation intent upfront. I’ve embedded these conversation modes in ChatGPT Plus custom instructions. The result is that if I start a creative writing conversation, the AI focuses on the goals of creative writing and does not bring in unrelated context from other types of work.
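As an illustration, custom instructions along these lines could encode the four modes. The wording below is a hypothetical sketch of such instructions, not the exact text I use:

```
Conversation modes. When I open a conversation with one of these labels,
adopt the matching goal and stay within it unless I switch explicitly:
- Teaching: explain, build scaffolding, clarify concepts, identify
  relevant examples.
- Research/Validation: check claims, separate evidence from inference,
  surface uncertainty.
- Writing/Creative: develop ideas, framing, and outlines; work with
  voice; explore creative possibilities.
- Review/Audit: diagnose strengths, gaps, risks, and alternatives.
Do not bring in unrelated context from other modes or prior work.
```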
Why Long AI Conversations Break Down
Regardless of the mode selected, and acknowledging that modes can overlap, longer AI conversations tend to degrade over time. The issue is unmanaged context. Selecting a conversation mode narrows context at the outset, but many of the most valuable conversations are meandering. They begin with one intent and evolve unpredictably.
Most current guidance focuses on prompting, which assumes short exchanges and treats iteration as a sequence of follow-up prompts to improve responses. Prompting alone does not address the dynamics of extended conversations.
How can we design conversations with AI?
Three Failure Modes
I’ve experienced three distinct failure modes in long conversations.
Path loss: Each turn commits to one direction among several possible paths. Alternative directions are left behind and become difficult to recover. At times I pursue a clarification knowing it is a temporary detour, while the AI interprets it as a new primary direction.
Context overload: Over multiple turns, accumulated content becomes unstructured and difficult to navigate.
Premature convergence: Following a single linear path, often reinforced by the AI’s tendency to affirm coherent reasoning, can lead to closing a conversation without exploring alternatives or validating key assumptions.
Context as the Missing Structure
If context is the underlying issue, the question becomes how to make it visible and manageable.
The paper “Mixed-Initiative Context: Structuring and Managing Context for Human-AI Collaboration” explores this problem in depth. It treats conversation context as something that can be actively structured and manipulated rather than passively accumulated. The authors propose an interface where conversations can branch into subtopics, where parts of the history can be selectively excluded, and where both the user and the AI participate in organizing the interaction.
The approach resonated with me because of its close relationship to concept mapping, insight mapping, and conversation mapping, which I continue to explore. Without access to such an interface, I began to consider how a similar approach could be approximated within a standard chat environment.
Approximating Structure in Standard Chat
This led me to develop a lightweight conversation protocol. It does not rely on fixed commands or special syntax; it consists of natural-language interventions used selectively when the conversation begins to drift.
Conversational parking lot: I might say, “Park: Draft a blog post about this topic.” This captures and defers a thread without losing it.
Validation hold: I might say, “Let’s return to this argument for validation.” This marks a claim or direction that requires verification before proceeding.
Summarization checkpoints: I often summarize at the end of a conversation, especially to capture insights in my Personal Knowledge Management system. My PKM uses TiddlyWiki, and I have set instructions so that when I enter (tw:), the output is formatted appropriately. Summaries can also be used mid-conversation. For example: “Summarize the conversation so far with a focus on what we’ve decided to pursue,” or “List any parked items.”
I do not expect the AI to decide what to park, validate, or summarize. I remain responsible for guiding the conversation. The AI assists by structuring and tracking what has been identified.
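The bookkeeping I ask the AI to perform can be made concrete with a small sketch. The class and method names below (ConversationState, park, hold, summary) are my own illustration of the protocol’s state, not part of any tool’s API:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    """Tracks what the protocol asks the AI to keep visible."""
    active: str = ""                                # current focus
    parked: list = field(default_factory=list)      # deferred threads
    holds: list = field(default_factory=list)       # claims awaiting validation

    def park(self, thread: str) -> None:
        """Conversational parking lot: defer a thread without losing it."""
        self.parked.append(thread)

    def hold(self, claim: str) -> None:
        """Validation hold: mark a claim to verify before building on it."""
        self.holds.append(claim)

    def summary(self) -> str:
        """Summarization checkpoint: active, parked, and pending items."""
        lines = [f"Active: {self.active}"]
        lines += [f"Parked: {t}" for t in self.parked]
        lines += [f"Validation hold: {c}" for c in self.holds]
        return "\n".join(lines)

# Mirrors the short example later in this post.
state = ConversationState(active="framing conversation design")
state.park("example from practice")
state.hold("paper's main claim")
print(state.summary())
```

The point of the sketch is that the state is small and explicit: one active thread, a list of parked threads, a list of holds. The user decides what goes into each; the AI only maintains and reports it.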
A Short Example in Practice
In earlier reflections on my use of GenAI, I noted the need to slow down and introduce pauses. This protocol adds more specific types of pauses.
Here is a shortened example, showing the intervention points:
User: I’m thinking about a blog post on improving long AI conversations. It might connect to that paper on context.
AI: You could frame it around prompting techniques, or focus on conversation structure.
User: Yes, structure feels right. Though I also want to include a concrete example, maybe from my own use. (Conversation continues for a few turns, mixing ideas)
User: Pause and review. I think we have two threads:
1. framing around conversation design
2. example from practice
Let’s focus on the first. Park the example for later.
AI:
• Active: framing conversation design
• Parked: example from practice
User: I want to bring in the paper, but I am not sure I am representing it accurately. Let’s put a validation hold on the paper’s main claim before we build around it.
AI: Here is a concise restatement of the paper’s core idea. (Conversation continues, now more focused)
User: Pause and summarize what we’ve established so far.
AI:
• Framing: shift from prompting to conversation design
• Mechanisms: parking, validation, summarization
• Pending: example (parked), confirm interpretation of paper
The conversation has not become simpler. It has become more structured.
Limits and Tradeoffs
I will need to test this approach over time. I tend to introduce structure, but not all of it persists. If the cognitive overhead is too high or the benefits are not clear, I may not apply the protocol consistently.
This approach requires discipline and intentionality. It is not necessary for short or low-stakes interactions. If the need does not arise frequently, the practice may fade. This is a minimal attempt to address some of the challenges discussed in the paper within the constraints of current tools.
From Prompting to Conversation Design
I continue to teach prompting, and it remains a necessary skill. For extended interactions, prompting alone is insufficient. As conversations grow in length and complexity, users must actively manage context. This reflects a shift from crafting individual inputs to shaping the trajectory of a conversation.
Given the pace of change in AI tools, this approach may have a limited lifespan. Interfaces may evolve to incorporate these capabilities directly. For now, it offers a practical way to improve the quality of long, exploratory conversations.