Beyond Generic Results with Generative AI

I was hosting a generative AI training workshop for a design team yesterday when someone asked:

"How can you get non-generic results from generative AI? Everything I’m getting feels fairly basic."

This is a common question that cuts to the essence of working with generative AI—how can we get results that are unique, interesting, and specific to our needs?

I believe there are three key layers to achieving non-generic results with generative AI:

  1. Set context and constraints

  2. Dig deeper

  3. Fuse your thinking with AI

1. Set Context and Constraints

By default, generative AI models, especially large language models, are trained to provide broadly helpful responses. This often leads to generic, concise answers.

To get more specific and relevant results, it's essential to set clear context and constraints for the model.

Here are some examples of useful context and constraints:

  • Goals: "Help me improve the conversion rate of this product page copy."

  • AI Role: "You're a world-leading copywriter who has worked with brands like Apple and Airbnb." You can even approach the same issue using multiple roles—an innovator, a critic, or a legal auditor.

  • Format: "Provide an analysis of our top competitors in a detailed table."

  • Style: "Write this copy in an engaging, relaxed yet professional tone."

  • Length: "Rewrite this product description in five words or fewer."

  • Insights: "Help me ideate solutions for mid-career mothers who struggle to make professional connections during maternity leave."

  • Examples: "Help me generate product ideas for university students. Below are examples of excellent VC-backed products."

  • References: "Generate ideas based on this research report (PDF attached)."

The more context and constraints you provide, the more likely you’ll get unique and relevant results from generative AI.
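To make this concrete, here is a minimal sketch of how goal, role, format, style, and length constraints can be packed into a single request. It assumes the OpenAI Python SDK (v1+); the model name and the product copy are placeholders, not a recommendation.

```python
# Minimal sketch: combine goal, role, format, style, and length constraints
# in one API call. Assumes the OpenAI Python SDK (>= 1.0); model name and
# the sample copy are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You're a world-leading copywriter who has worked with brands like "
    "Apple and Airbnb. Write in an engaging, relaxed yet professional tone."
)

user_prompt = (
    "Goal: help me improve the conversion rate of this product page copy.\n"
    "Format: return a table comparing the original and rewritten copy.\n"
    "Length: keep the headline to five words or fewer.\n\n"
    "Copy to improve:\n"
    "Our activewear is made from recycled materials and ships worldwide."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)
```

The same structure works in any chat interface: the system message carries the role and style, while the user message carries the goal, format, length, and the material to work on.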

For even deeper customization, your organization can build a RAG (Retrieval-Augmented Generation) pipeline that retrieves answers from your proprietary data, such as in-depth product documentation or employee playbooks.
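As a rough illustration of the retrieval step in such a pipeline, the sketch below embeds a few of your own documents, finds the chunks most similar to a question, and prepends them to the prompt. It assumes the OpenAI Python SDK and embeddings API; the documents, model names, and top-2 cutoff are illustrative, and a production pipeline would add chunking and a vector database.

```python
# Minimal RAG retrieval sketch: embed proprietary documents, retrieve the
# most similar chunks, and ground the prompt in them. Assumes the OpenAI
# Python SDK; documents and model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Returns are accepted within 30 days if the garment is unworn.",
    "Our leggings are made from 78% recycled nylon and 22% elastane.",
    "Ambassadors receive early access to product drops twice a year.",
]

def embed(texts):
    """Return one unit-length embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vectors = np.array([item.embedding for item in result.data])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

question = "What are our leggings made of?"
doc_vectors = embed(documents)
query_vector = embed([question])[0]

# Cosine similarity (vectors are normalized); keep the two closest chunks.
scores = doc_vectors @ query_vector
top_chunks = [documents[i] for i in np.argsort(scores)[::-1][:2]]

prompt = (
    "Answer using only the context below.\n\nContext:\n"
    + "\n".join(top_chunks)
    + f"\n\nQuestion: {question}"
)
answer = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)
```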

2. Dig Deeper

While context and constraints help, another common challenge with LLMs is conciseness. This is often intentional—short, to-the-point answers save both user time and the resources required for AI inference.

However, when you need to go beyond surface-level answers, digging deeper becomes essential. A simple yet effective technique for this is "Tree of Thought" prompting.

Inspired by 2023 research from Princeton and Google DeepMind, Tree of Thought prompting involves asking a series of related questions, with each follow-up digging deeper into one "branch" of the topic.

For example:

Prompt 1: "What are the key issues I should consider when designing a customer community for a sustainable activewear brand?"
Prompt 2: "Dig deeper into Gamification for Engagement. What are the best strategies? Include 10 benchmarks from leading global brands."
Prompt 3: "Explore Lululemon’s Sweat Collective Ambassador Program in detail. What are the core mechanics, and what makes it successful?"

By delving into each branch, you’ll uncover a broader and deeper spectrum of insights. For most topics, especially those you're familiar with, this approach will yield richer results.

[Image: Example of Tree of Thought prompting while designing an online community]
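If you prefer to script this rather than type each follow-up by hand, here is a minimal sketch of the "dig deeper" loop using the three prompts above. It sends each follow-up in the same conversation so the model keeps the earlier branches in context; it assumes the OpenAI Python SDK, and the model name is a placeholder.

```python
# Minimal sketch of chained follow-up prompts: each question is appended to
# the same conversation so earlier branches stay in context.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

prompts = [
    "What are the key issues I should consider when designing a customer "
    "community for a sustainable activewear brand?",
    "Dig deeper into Gamification for Engagement. What are the best "
    "strategies? Include 10 benchmarks from leading global brands.",
    "Explore Lululemon's Sweat Collective Ambassador Program in detail. "
    "What are the core mechanics, and what makes it successful?",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})  # keep this branch in context
    print(reply)
    print("-" * 40)
```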

3. Fuse Your Thinking with AI

To effectively use generative AI, it’s important to integrate these techniques into your daily workflow until they become second nature. Still, a significant portion of the unique value will come from your own thinking.

It's unrealistic to expect AI to produce all the necessary insights on its own. Instead, we need to guide the conversation, blend AI’s outputs with our own ideas, and actively engage in the problem-solving process.

Ethan Mollick calls this the "Cyborg" mode of collaboration with AI—an iterative process where human and artificial intelligence bounce ideas back and forth. This contrasts with "Centaur mode," where the division between human work and AI-generated input is more rigid.

Going Beyond the Obvious

As generative AI becomes more prevalent in the workplace, the initial results from language models or image generators will often represent the bare minimum. Achieving standout results requires leveraging context, constraints, and deeper inquiry.

As we become more adept at working with AI, it's essential to remain intellectually curious and proactive rather than simply becoming copy-paste operators of our new digital assistants.

Matias Vaara

I help teams tap into the power of generative AI for design and innovation.

My weekly newsletter, Amplified, shares practical insights on generative AI for design and innovation.
