
How to Write Better AI Prompts: A Beginner's Guide

Get better results from AI assistants by learning how to ask for what you want

Trần Quang Hùng, Chief Explainer of Things
December 18, 2025 · 12 min read
Illustration of a person writing prompts on a laptop, with abstract representations of text becoming more structured as it flows upward

QUICK INFO

Difficulty: Beginner
Time Required: 45-60 minutes to read; ongoing practice
Prerequisites: Access to any AI chatbot (ChatGPT, Claude, Gemini, etc.)
Tools Needed: Web browser, free tier of any major AI assistant

What You'll Learn:

  • How to structure prompts that get useful responses on the first try
  • The difference between vague requests and specific instructions
  • When to provide examples and how many
  • How to iterate when your first prompt doesn't work

This guide teaches you to write prompts that work. It's for anyone frustrated by AI responses that miss the point, ramble too long, or ignore half of what you asked. We'll cover the techniques that matter and skip the academic theory.

What Prompt Engineering Actually Means

The term gets thrown around a lot. It sounds more technical than it is.

Prompt engineering is just writing instructions for AI models. That's it. The "engineering" part comes from the fact that small changes in how you phrase things can produce dramatically different outputs. A prompt that says "write about dogs" will give you something generic. A prompt that says "write 200 words about why golden retrievers make good family pets, focusing on temperament and trainability" will give you something useful.

The gap between those two prompts is what this guide covers.

I should clarify something here: prompt engineering isn't a separate skill from clear communication. If you can write a good email to a colleague explaining what you need, you already have most of the foundation. The main difference is that AI models are more literal than humans. They won't fill in gaps the way a person would. They also won't push back when instructions are contradictory or incomplete. They'll just... try something.

Getting Started

You need access to an AI assistant. Any of the major ones work for learning: ChatGPT, Claude, Gemini, Copilot. The free tiers are fine. The techniques in this guide apply across all of them, though you'll notice each model has quirks. Claude tends to be more verbose. ChatGPT sometimes adds caveats you didn't ask for. Gemini can be terse.

Open a new conversation. That's your practice environment.

One thing before we start: responses vary. You can send the exact same prompt twice and get different results. This isn't a bug. AI models have built-in randomness (controlled by a setting called "temperature"). Don't expect identical outputs when testing.

The Core Problem with Bad Prompts

Most people write prompts like they're texting a friend who already has context. "Can you help me with my project?" assumes the AI knows what project, what kind of help, and what format you want.

It doesn't know any of that.

The AI sees your message and nothing else. No prior relationship, no understanding of your situation, no ability to ask clarifying questions before responding (unless you explicitly tell it to). So it guesses. Sometimes the guesses are good. Often they're not.

The fix is straightforward: provide the context yourself.

The Five Elements That Matter

After testing hundreds of prompts across different models, I've found five elements that consistently improve results. You don't need all five every time. But knowing what they are helps you figure out what's missing when a prompt doesn't work.

Task. What do you want the AI to do? "Summarize" is different from "critique" is different from "rewrite." Be specific about the action.

Context. What background information does the AI need? If you're asking for marketing copy, mention the product, the audience, and the tone. If you're asking for code help, include the error message and relevant code snippet.

Format. How should the output be structured? A bulleted list? A single paragraph? A table? If you don't specify, you'll get whatever the model defaults to, which is often more structured than you want.

Length. How much do you want? "Brief" means different things to different people (and different models). "Under 100 words" is unambiguous.

Examples. What does a good response look like? Showing one or two examples of the style or format you want is often more effective than describing it.
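If you find yourself reusing the same structure, the five elements can be assembled programmatically. Here's a minimal sketch in Python; the `build_prompt` function and its parameter names are my own invention for illustration, not a standard API:

```python
def build_prompt(task, context=None, fmt=None, length=None, examples=None):
    """Assemble a prompt from the five elements; any element may be omitted."""
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if fmt:
        parts.append(f"Format: {fmt}")
    if length:
        parts.append(f"Length: {length}")
    if examples:
        parts.append("Example of the style I want:\n" + "\n".join(examples))
    # Blank lines between elements keep the prompt easy to scan.
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Write a product description for wireless noise-canceling headphones.",
    context="Target customer: remote workers who take video calls.",
    fmt="A single paragraph of plain prose, no bullet points.",
    length="Under 150 words.",
)
print(prompt)
```

The point isn't the code itself; it's that a prompt is just structured text, and treating each element as a separate slot makes it obvious which one is missing when results disappoint.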

Writing Your First Structured Prompt

Let's build a prompt step by step.

Say you need help writing a product description for an online store. Here's the progression from weak to strong:

Attempt 1: "Write a product description for headphones."

This will produce something generic. The AI doesn't know what headphones, what store, what audience, or what length. You'll get output, but probably not what you need.

Attempt 2: "Write a product description for wireless noise-canceling headphones. The target customer is remote workers. Keep it under 150 words."

Better. We've added context (wireless, noise-canceling), audience (remote workers), and length constraint. The response will be more targeted.

Attempt 3: "Write a product description for the SoundMax Pro wireless noise-canceling headphones. Target audience: remote workers who take video calls. Tone: professional but conversational. Length: 100-150 words. Focus on: battery life, noise cancellation quality, and comfort for all-day wear. Don't include pricing."

This gives the model everything it needs. The output will be close to usable on the first try.

Notice the progression. Each version adds specificity without adding complexity. You're not writing a novel; you're filling in blanks.

Examples Are Worth More Than Descriptions

Here's something I learned the hard way: showing beats telling.

You can spend a paragraph describing the tone you want. Or you can paste an example and say "write in this style." The second approach works better almost every time.

This is especially true for formatting. If you want responses in a specific structure, paste that structure with placeholder text:

Format your response like this:
**Summary:** [2-3 sentence overview]
**Key Points:** [3-5 bullet points]
**Recommendation:** [1 sentence]

The model will follow that template. It's more reliable than saying "give me a summary, then key points, then a recommendation."
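If you use the same output template across many requests, it's worth storing it once and appending it mechanically. A small sketch, with a hypothetical helper name:

```python
OUTPUT_TEMPLATE = """Format your response like this:
**Summary:** [2-3 sentence overview]
**Key Points:** [3-5 bullet points]
**Recommendation:** [1 sentence]"""

def with_output_template(request: str) -> str:
    """Append the shared output template so every response has the same shape."""
    return f"{request}\n\n{OUTPUT_TEMPLATE}"

prompt = with_output_template(
    "Review the attached meeting notes and tell me what to do next."
)
```

This is how prompt libraries usually start: one template, reused everywhere it applies.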

How many examples should you provide? One is usually enough to establish a pattern. Two helps if the pattern has variation you want preserved. More than three rarely helps and sometimes confuses things. I haven't tested this rigorously across all models, so your experience may vary.

When Prompts Fail: The Iteration Loop

Your first prompt won't always work. That's normal.

When the response isn't what you wanted, resist the urge to start over with a completely different approach. Instead, figure out what went wrong and adjust.

Common failure modes:

Too long. Add a word count limit. Be specific: "under 200 words" works better than "keep it brief."

Wrong tone. Provide an example of the tone you want, or specify directly: "more casual" or "more formal" or "like a text message to a friend."

Missing key information. The model can't include what it doesn't know. Add the missing context to your prompt.

Too generic. Add constraints. Specificity forces the model away from safe, bland responses.

Hallucinated facts. Ask the model to only use information you provide, or to clearly mark anything it's uncertain about. This doesn't eliminate hallucinations but reduces them.

The iteration usually takes 2-3 rounds. If you're past five attempts and still not getting what you need, the task might be genuinely ambiguous, or the model might not be capable of what you're asking. Both happen.

Role Prompting

Telling the AI to adopt a specific role or persona changes its responses. "You are an experienced copy editor" produces different feedback than "you are a supportive writing coach." The first will be more critical; the second more encouraging.

This technique works, but it's easy to overcomplicate. You don't need elaborate backstories. A single sentence establishing the role is enough: "You are a senior software engineer reviewing code for a junior developer" or "You are a nutritionist explaining dietary changes to someone new to healthy eating."

Some people swear by detailed persona descriptions. In my testing, simple role statements work nearly as well for most tasks. The exception is creative writing, where more character detail can influence voice and perspective in useful ways.

Chain of Thought: Making the AI Show Its Work

For complex reasoning tasks (math problems, logic puzzles, multi-step analysis), asking the model to explain its reasoning step by step improves accuracy.

The phrase "think step by step" or "explain your reasoning" triggers this behavior. You can also structure it more explicitly:

Solve this problem. Before giving your final answer:
1. Identify what information is given
2. Identify what we're solving for
3. Work through each step of your calculation
4. Check your answer
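The four-step scaffold above is easy to reuse as a wrapper around any problem statement. A minimal sketch (the function name is mine, not a standard):

```python
REASONING_STEPS = [
    "Identify what information is given",
    "Identify what we're solving for",
    "Work through each step of your calculation",
    "Check your answer",
]

def chain_of_thought_prompt(problem: str) -> str:
    """Wrap a problem statement in the step-by-step scaffold."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(REASONING_STEPS, 1))
    return (
        "Solve this problem. Before giving your final answer:\n"
        f"{steps}\n\nProblem: {problem}"
    )

prompt = chain_of_thought_prompt(
    "A train travels 120 km in 90 minutes. What is its average speed in km/h?"
)
```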

Why does this work? The prevailing theory is that generating intermediate steps keeps the model from jumping to conclusions. Whether that's technically accurate, I'm not sure. The practical effect is clear: step-by-step reasoning produces fewer errors on math and logic tasks.

This isn't useful for simple requests. Asking an AI to "think step by step" about writing a haiku is overkill.

System Prompts vs. User Prompts

If you're using the API or a tool that exposes system prompts, you have two places to put instructions. System prompts set persistent behavior for the entire conversation. User prompts are individual messages.

For most people using chat interfaces, this distinction doesn't matter. You just have the message box.

But if you're building something with the API: put instructions that should apply to every response in the system prompt. Put task-specific details in user prompts. The model treats system prompts as higher priority, so conflicts usually resolve in favor of system instructions. Usually.
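Concretely, most chat APIs take a list of role-tagged messages. A sketch in the widely used OpenAI-style format (the store and rules are made up for illustration):

```python
# Persistent behavior lives in the system message; the task-specific
# request lives in the user message.
messages = [
    {
        "role": "system",
        "content": (
            "You are a customer support writer for an online store. "
            "Always answer in under 100 words and never mention pricing."
        ),
    },
    {
        "role": "user",
        "content": "A customer asks how to return a damaged item. Draft a reply.",
    },
]

# The list is then passed to the provider's chat endpoint, e.g. with the
# OpenAI SDK: client.chat.completions.create(model=..., messages=messages)
```

Every user message in the conversation gets the benefit of the system instructions without repeating them.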

Constraints and Guardrails

Sometimes the best way to improve output is to say what you don't want.

"Don't include any technical jargon" keeps explanations accessible. "Don't use bullet points" forces prose. "Don't apologize or add disclaimers" cuts the filler that models sometimes add.

Negative constraints pair well with positive instructions. "Write in an upbeat tone but don't use exclamation points" is more precise than either instruction alone.

A word of caution: models don't always follow negative constraints perfectly. "Don't mention pricing" might still produce a response that alludes to cost. If something absolutely cannot appear in the output, you may need to filter after generation rather than relying on the prompt alone.
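That post-generation filter can be as simple as a substring check. A sketch, assuming a made-up "don't mention pricing" constraint:

```python
def violates_constraints(text: str, banned_terms: list[str]) -> bool:
    """Return True if the generated text contains any excluded term."""
    lowered = text.lower()
    return any(term.lower() in lowered for term in banned_terms)

# A response that slipped past a "don't mention pricing" instruction:
response = "These headphones are a great value at their price point."
if violates_constraints(response, ["price", "pricing", "$"]):
    # Regenerate the response, edit it, or reject it before publishing.
    pass
```

For anything that truly must not appear, a check like this is more reliable than the prompt alone.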

Temperature and Other Settings

Most chat interfaces don't expose these settings, but they exist. Temperature controls randomness. Higher temperature (0.7-1.0) produces more creative, varied responses. Lower temperature (0.1-0.3) produces more predictable, consistent responses.

For creative writing, higher temperature often helps. For factual questions or code generation, lower temperature reduces errors.

If you're using an API or a tool that exposes these settings, experiment. If you're using a standard chat interface, you don't have control over this, so don't worry about it.
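If you do have API access, the rule of thumb from above can be encoded directly. A sketch with a hypothetical helper; the ranges mirror the ones mentioned earlier:

```python
def sampling_params(task_type: str) -> dict:
    """Pick a temperature by task type (rule of thumb, not a standard)."""
    if task_type in ("creative", "brainstorming"):
        return {"temperature": 0.9}   # more varied, surprising output
    if task_type in ("factual", "code"):
        return {"temperature": 0.2}   # more predictable, consistent output
    return {"temperature": 0.7}       # a common general-purpose default

# Passed alongside the prompt in an API call, e.g. with the OpenAI SDK:
# client.chat.completions.create(model=..., messages=..., **sampling_params("code"))
```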

Troubleshooting

Symptom: Response ignores part of your prompt
Fix: Move the ignored instruction to the end of your prompt, or separate it onto its own line. Models sometimes lose track of instructions buried in long paragraphs. You can also try numbering your requirements.

Symptom: Response is much longer than requested
Fix: Be more explicit about length. "Under 200 words" works better than "brief." Some models (Claude in particular) run long by default. Adding "be concise" sometimes helps.

Symptom: Response includes information you didn't ask for
Fix: Add "only include X" or "focus exclusively on Y." Models tend toward comprehensive responses unless constrained.

Symptom: Response contradicts itself
Fix: Break the task into smaller pieces. Ask for one thing at a time. Complex prompts with multiple competing requirements cause confusion.

Symptom: Response feels generic or bland
Fix: Add specific constraints or examples. Generic prompts get generic responses. Try adding "avoid clichés" or "use unexpected analogies" for creative tasks.

What's Next

You now have the fundamentals. The next step is practice. Pick a task you actually need done, write a prompt using the structure covered here, and iterate until the output works.

For more advanced techniques, look into prompt chaining (using one AI output as input for another prompt), retrieval-augmented generation (giving the AI access to external documents), and few-shot learning (providing multiple examples to establish complex patterns). Anthropic and OpenAI both publish documentation on these topics.


PRO TIPS

Start every complex prompt with the most important instruction. Models weight the beginning and end of prompts more heavily than the middle.

When asking for revisions, quote the specific part you want changed rather than saying "make it better." "Rewrite the second paragraph to be more conversational" beats "improve the tone."

If a prompt worked well, save it. Building a library of effective prompts for recurring tasks saves time and produces more consistent results than rewriting from scratch.

Use line breaks to separate different types of instructions. A wall of text is harder for both you and the model to parse.


FAQ

Q: Do prompts that work in ChatGPT also work in Claude or Gemini?
A: Mostly, yes. The core techniques transfer. But each model has tendencies. You may need to adjust length constraints or be more explicit about format with certain models. The only way to know is testing.

Q: Is there a maximum prompt length?
A: Yes, but it's large enough that most people won't hit it. Claude and GPT-4 handle tens of thousands of words. Hitting the limit usually means you're including too much context, not that your instructions are too detailed.

Q: Should I say "please" and "thank you" to AI?
A: It doesn't affect output quality in any measurable way. Do whatever feels natural to you.

Q: Does capitalization matter?
A: For instructions, no. ALL CAPS doesn't make the model pay more attention, despite what some prompt guides claim. For proper nouns or specific terms you want preserved, match the capitalization you want in the output.

Q: How do I stop the AI from adding caveats and disclaimers?
A: Ask directly: "Don't include caveats, disclaimers, or warnings." It works most of the time but not always. Safety-related disclaimers in particular are often hard to suppress.


Trần Quang Hùng

Chief Explainer of Things

Hùng is the guy his friends text when their Wi-Fi breaks, their code won't compile, or their furniture instructions make no sense. Now he's channeling that energy into guides that help thousands of readers solve problems without the panic.

