How to make LLMs work like you think

A practical guide for marketers who are tired of vague advice, flat outputs, and recycled templates.

By Rachamim Kennard, Updated on April 30, 2025, 5 min read

Most frustration with AI comes from weak prompting.

 

Please don’t blame the model; the issue is what you’re feeding it. Vague inputs will always produce vague results. If your prompt is unclear, your output will be, too, regardless of how powerful the system is.

 

Prompting isn’t a clever phrase typed into a box. It’s how you structure direction, define tone, manage context, specify objectives, and iterate outputs. In real marketing workflows, prompting includes role-setting, persona modeling, project scaffolding, and revision tracking. It’s a cognitive process that touches every part of your thinking.

 

That’s why we built the Spike Prompt Crafting Playbook, not as a list of prompts but as a practical system for structuring effective AI interactions across high-value marketing work. This post introduces the underlying thought process.


How I learned to prompt strategically

I use LLMs daily across various channels, including email, social media, product copy, campaign strategy, and brand messaging. Initially, the results were disappointing. The tone was off, the copy sounded generic, and the messaging didn’t reflect the brand.

 

But I didn’t blame the model. I refined the prompt, and then I refined it again. I tested variations, introduced scaffolding, changed role settings, constrained tone, and tracked outputs. I figured out what caused the model to collapse and what made it work, and then I built a process around it.

 

Spike is an AI-first company, so I had the freedom to experiment, refine, and develop fast, accurate, brand-safe, and repeatable systems. The outcome was faster workflows, sharper voice alignment, and fewer revisions across channels.

 

Outside of work, the AI conversation was loud and divisive. Some called LLMs the future of creativity, while others viewed them as fueling content mills to flood the web with junk. However, the real issue wasn’t AI itself, but rather how people were using it.

 

Yes, generic AI output is available online. But let’s not pretend that Google SERPs were full of original thinking before ChatGPT. Mediocrity scaled long before AI entered the picture. All that AI did was democratize it.

 

However, there’s a real upside. When anyone can produce content at scale, originality becomes the only thing that stands out. If AI handles the predictable, it pushes marketers to lean into what can’t be automated: creative judgment, distinct voice, and original thinking.

 

That was the shift for me. Everything changed when I stopped using AI to move faster and started using it to think more deeply, clarify ideas, sharpen strategy, and push past the obvious.

Prompting as structured thinking

Many guides treat prompting like a phrasing game, which it isn’t. Prompting is a reflection of your structured decision-making process. The output reflects how well you’ve framed the task.

 

I use a basic five-part framework to guide every creation prompt:

 

  1. Comprehension: What’s the task?
  2. Context: Who is it for, and why does it matter?
  3. Strategy: What’s the goal or intent?
  4. Style: What tone and voice are required?
  5. Synthesis: What constraints or creative boundaries should the model work within?

 

If the output quality slips, it usually means that one of these five layers is missing.
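One way to make the framework concrete is a small helper that captures each of the five layers as an explicit field, so a missing layer is immediately visible. This is my own illustrative sketch, not part of the playbook; the field names and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PromptBrief:
    """The five layers of the framework, captured as explicit fields."""
    comprehension: str  # What's the task?
    context: str        # Who is it for, and why does it matter?
    strategy: str       # What's the goal or intent?
    style: str          # What tone and voice are required?
    synthesis: str      # What constraints should the model work within?

    def render(self) -> str:
        """Assemble the five layers into a single prompt string."""
        return "\n".join([
            f"Task: {self.comprehension}",
            f"Audience & context: {self.context}",
            f"Goal: {self.strategy}",
            f"Tone & voice: {self.style}",
            f"Constraints: {self.synthesis}",
        ])

brief = PromptBrief(
    comprehension="Write a follow-up email for playbook downloaders.",
    context="Leads who downloaded the AI playbook but haven't re-engaged.",
    strategy="Revive interest without sounding pushy or salesy.",
    style="Grounded, helpful, confident; strategist, not hype machine.",
    synthesis="Under 150 words, one subject line, one clear CTA.",
)
print(brief.render())
```

Because every field is required, you can't build the prompt while leaving a layer blank, which is exactly the failure mode the framework guards against.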

 

 

 

Four shifts that changed my results

These are the four structural fixes that made prompting work across production tasks.

 

 

1. Brief first, then request

Most prompts start with a request. I begin with a concise setup that explains the goal, the audience, the format, the tone, and what the output should achieve, much as I would brief a teammate before handing off a task.

 

Instead of jumping straight to:

 

Write a follow-up email,

 

I’ll set it up like this:

 

We’re following up with leads who downloaded the AI playbook last week but haven’t re-engaged. The goal is to revive interest by demonstrating how the playbook addresses common pain points without coming across as pushy or salesy.

 

The format is a concise email (under 150 words) with a compelling subject line and a single clear call-to-action (CTA).

Keep the tone grounded, helpful, and confident like a strategist, not a hype machine.

The value prop isn’t just the playbook itself but what they can do with it: reduce wasted output, tighten team workflows, and sharpen AI usage.

 

That setup consistently yields sharper, more on-brand output because the model knows who it’s writing for and exactly what the output should achieve.

 

 

2. Treat first drafts as data

LLMs output interpretations, not answers. That first draft isn’t final; it reflects how the model understood your prompt. If something’s off, I don’t edit the output. I diagnose the misunderstanding and then revise the input.

 

For example, I once prompted the model to summarize a product’s benefits, but the copy came back soft and vague, burying the value behind hedging phrases like “you might find this helpful” or “can improve productivity.”

 

Instead of manually rewriting the copy, I revised the prompt:

 

This is a feature overview for decision-makers evaluating tools.
The first two lines need to highlight time savings and team alignment.
Use confident language and avoid soft qualifiers like ‘might’ or ‘can’.
Keep it direct, not hypey.

 

That adjustment clarified the output dramatically, without touching a word of the draft itself.

 

 

3. Your voice needs examples

Adjectives like “professional but friendly” are too vague. The model needs to learn from patterns, not just descriptions. I guide voice through structured examples: sentence styles, cadence, phrase choices, and tone samples.

 

Here’s a typical brief:

 

  • Voice: short, precise, mildly skeptical.

 

  • Reference: this landing page intro + this paragraph from our CEO.

 

  • Use: declarative sentences with minimal qualifiers.

 

This keeps the tone aligned and prevents drift across outputs.

 

 

4. Understand the technical foundations

You don’t need to be an AI engineer to refine your prompting process, but knowing a few core concepts helps.

 

Key mechanics I rely on:

 

  • Prompting style: use few-shot when precision matters.

 

  • Role settings: use system prompts to define behavior and user prompts to deliver tasks.

 

  • Controls: tweak temperature to shift from creative to predictable.

 

  • Model choice: LLMs differ in how they interpret structure and constraints.

 

The better you understand what’s happening under the hood, the more you can control the output quality.
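The mechanics above can be sketched as a request builder in the OpenAI-style chat format (role-tagged messages plus a temperature control). This is a minimal illustration under that assumption: the model name, temperature value, and example copy are placeholders, not recommendations.

```python
def build_request(system_prompt: str, examples: list[tuple[str, str]],
                  task: str, temperature: float = 0.4) -> dict:
    """Compose a chat request: the system prompt defines behavior,
    few-shot examples set the pattern, the user prompt delivers the task."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in examples:  # few-shot pairs for precision
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": task})
    return {
        "model": "gpt-4o",           # model choice changes how structure is read
        "temperature": temperature,  # lower = more predictable, higher = more creative
        "messages": messages,
    }

request = build_request(
    system_prompt="You are a brand copywriter. Voice: short, precise, mildly skeptical.",
    examples=[("Describe our scheduling feature.",
               "Stop losing hours to calendar chaos. One tool, zero back-and-forth.")],
    task="Write LinkedIn copy about the scheduling tool for operations managers.",
)
```

Separating behavior (system), pattern (few-shot examples), and task (user) is what makes each lever tunable on its own.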


 

What happens when you prompt properly?

Clear prompting transforms results. When you give the model structure, intent, and voice constraints, it stops guessing and starts aligning. Here’s how the same task plays out with and without a proper brief:

 

Basic prompt:

Write social media copy about our scheduling feature.

 

Output:

“Introducing our new scheduling tool! Save time and streamline your workflow.”

 

Improved prompt:

Write LinkedIn copy about our scheduling tool for operational managers who lose 5+ hours weekly to calendar conflicts.

Our voice is direct, practical, and slightly informal. Avoid corporate jargon. Emphasize time savings.

 

Improved output:

“Operational managers: How much of your 40-hour week is lost to calendar chaos?

Our users reclaim hours every week with three specific automations. See how (link).”

 

It’s the same model, but with different inputs it produced a vastly better output.

 

Language models work through probability. When the input is vague, the model defaults to what it’s seen most often: bland, generic filler. But when the input is clear, the pattern tightens and the floor rises.

 

This is why tone, structure, and specificity matter. Without them, the model drifts, a phenomenon researchers refer to as semantic drift, and your message becomes diluted.

 

 

How to use iterative prompting

Don’t treat prompting as a one-and-done instruction; use it for what it is: an interactive system. Every output provides feedback on how well the input was processed.

 

Here’s how I work through it:

 

  • Review the output: What landed? What felt off? What’s missing?

 

  • Check the constraints: Were they too broad, or too specific and limiting?

 

  • Refine the examples: Are they still reflecting the tone and structure I actually want?

 

  • Save the prompts: keep every iteration so I can track what improved and why.
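The last step, saving iterations, can be sketched as a tiny version log. The structure here is my own assumption about what's worth recording; adapt the fields to your workflow.

```python
class PromptLog:
    """Track prompt iterations with a note on what changed and why."""
    def __init__(self):
        self.versions = []

    def save(self, prompt: str, note: str) -> int:
        """Record an iteration; returns its version number."""
        self.versions.append({"prompt": prompt, "note": note})
        return len(self.versions)

    def latest(self) -> str:
        """Return the most recent prompt."""
        return self.versions[-1]["prompt"]

log = PromptLog()
log.save("Write a follow-up email.",
         "baseline: too vague, output was generic")
log.save("Write a follow-up email for leads who downloaded the playbook; "
         "under 150 words, one CTA, grounded tone.",
         "added audience, format, and tone constraints")
```

Even a log this simple makes the cause-and-effect of each revision visible, which is the point of the review loop.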

 

With this approach, I turned prompting into a repeatable creative process. It cut production time in half and made AI viable not just for drafts but also for real copy in campaigns, decks, and messaging strategies.

Turning the process into a playbook

Everything I’ve shared so far came from real workflows, tight deadlines, broken outputs, and the need to get AI working under pressure. Over time, that trial-and-error became repeatable: a method that works.

 

I documented that system and turned it into a tactical, field-tested playbook for getting real results with large language models in real marketing environments. It’s built from real briefs, real work, and real iteration.

 

 

What the Spike Prompt Crafting Playbook delivers

We’re not handing out a list of prompts or templates. You can find plenty of those elsewhere. But what happens when none of them fit what you actually need?

 

That’s where this playbook comes in. It provides a structured system for integrating LLMs into real-world marketing workflows, allowing you to create prompts with clarity, control, and purpose.

 

Inside, you’ll find:

 

  • A five-part framework for structured prompting.

 

  • Role and persona modeling for tone control.

 

  • Troubleshooting tactics to fix vague or off-brand outputs.

 

  • Channel-specific strategies for email, social, and content creation.

 

  • Testing protocols to measure effectiveness.

 

It’s not theory; it was built through practice under real stakes and with real deliverables.

👉 Download the Spike Prompt Crafting Playbook

If your prompts have been inconsistent, this will give you structure. If your outputs feel generic, this will show you how to improve them.

 

Better prompting is all about better thinking, and this guide shows you how to do it.

Rachamim Kennard is a data-driven strategist with a passion for scaling brands through organic channels. With years of experience in SEO, content marketing, and growth experimentation, he now leads Organic Growth at Spike, overseeing content, SEO, and CRO efforts.
