What AI Reveals About How We Think

"I know exactly what I want—until I have to explain it to an AI."

This frustration is becoming increasingly common: the sensation of clarity vanishing the moment your cursor starts blinking in an empty prompt box.

The Paradox of Explaining to AI

I have encountered this cognitive paradox countless times myself. Even though I know my subject matter thoroughly, I often struggle to articulate my needs to an AI system in a way that yields the result I want. The knowledge doesn't disappear; suddenly, though, I must make explicit what has always been implicit in my thinking.

This tension reveals something profound about our relationship with artificial intelligence, a phenomenon researchers now call the "gulf of envisioning" (Subramonyam et al., 2024).

Cognitive scientists studying how people engage with generative AI found something intriguing: even highly articulate professionals often struggle to formulate effective prompts. The difficulty arises because so much of our thinking happens unconsciously; putting intent into words forces us to surface what we normally never articulate.

It's similar to being asked to describe exactly how you ride a bicycle. You can ride effortlessly, yet spelling out the precise sequence of tiny adjustments that keep you balanced is almost impossible.

Why Even Experts Struggle with AI Prompts

What’s particularly intriguing is that this challenge persists regardless of a person's expertise. The marketing director who skillfully briefs a creative team struggles to brief an AI. Similarly, the lawyer who articulates complex arguments to a judge finds herself unsure when communicating with ChatGPT.

Why? Because human communication relies on shared context, subtle cues, and mutual adjustment, conventions refined over thousands of years. Strip those away, and we must state explicitly what has always been implicit in our thinking.

I’ve noticed that communication professionals frequently have a unique edge in this evolving environment. They routinely simplify intricate ideas into straightforward briefs, identify unspoken client requirements, and clarify vague concepts. These activities refine the very skills essential for successful interaction with AI. Their ability to make the implicit explicit has now become extremely valuable.

The Three Barriers to Effective AI Interaction

The researchers identified three specific barriers that create the gulf of envisioning:

  1. We do not know the boundaries of the AI's capabilities, like trying to navigate a city without a map.

  2. We lack the vocabulary to instruct it effectively, like trying to explain colour to someone who has never seen it.

  3. We cannot anticipate what we will receive, like ordering from a restaurant whose menu is written in a language we do not speak.

What intrigues me most about this research is not the problem it uncovers but what it reveals about how we think. Using AI effectively demands metacognition: the ability to think about our own thinking. That means articulating goals we may never have put into words, examining processes we usually run on autopilot, and judging results against criteria we have never spelled out (Tankelevitch et al., 2024).

No wonder it's exhausting.

How Structure Can Bridge the Gap

Yet something remarkable happens when people are provided with structure. In my workshops with executives, I have noticed that offering even a simple framework for AI interaction—a set of questions to consider before writing a prompt—yields immediate relief.

In one session, a participant who initially labelled herself "AI-resistant" was introduced to a framework that encouraged her to reflect on her audience, the desired outcome, and evaluation criteria before crafting a prompt. Her demeanour visibly transformed. What began as a reluctant experiment resulted in multiple successful interactions by the end of our time together.

This structured approach acts as scaffolding for AI interaction: not permanent, but supportive enough to hold until users develop their own intuition and confidence.

AI as a Mirror for Our Thinking

Some technologists argue these frameworks are unnecessary, insisting that better AI systems will eventually interpret our vague instructions well enough on their own. But this view misses a vital point: the structures are not only about producing better AI outputs. They are chiefly about cultivating our own thinking.

By clarifying our intentions before we engage with AI, we strengthen metacognitive skills that extend far beyond the technology. When we examine the gap between what we expected and what we received, we sharpen our capacity for clear self-reflection. Seen this way, AI stops being merely a productivity tool and becomes a mirror, revealing the unconscious processes that shape our thoughts.

The most successful AI users I've seen view the technology not as a substitute for thinking but as a collaborator in that process. They explore deliberately instead of attempting blindly. They build mental frameworks regarding how the AI understands different instructions. They evaluate outputs methodically rather than forming broad conclusions based on isolated interactions.

In essence, they are learning a new language—not one of code or technology, but of their own cognition.

Thus, perhaps the most valuable question isn't how to enable AI to understand us better, but rather how our interactions with AI can aid us in comprehending ourselves. Each clarification we provide in a prompt, each revision of our thoughts, and each reflection on what has worked and what hasn’t—these present opportunities to glimpse the invisible architecture of our minds.

References

Subramonyam, H., Pea, R. D., & Seifert, C. M. (2024). Bridging the gulf of envisioning: Cognitive challenges in prompt based interactions with LLMs. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). https://doi.org/10.1145/3613904.3642754

Tankelevitch, L., Kewenig, V., Simkute, A., Scott, A. E., Sarkar, A., Sellen, A., & Rintel, S. (2024). The metacognitive demands and opportunities of generative AI. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). https://doi.org/10.1145/3613904.3642902

