
Prompting 101: A Practical Guide to Claude, Gemini 3, and NotebookLM

Skip the tool-hopping. Learn to prompt three AI systems that handle different jobs.

Trần Quang Hùng, Chief Explainer of Things
January 16, 2026 · 22 min read
[Illustration: three AI prompting workflows: structured reasoning, multimodal processing, and source-grounded research]

QUICK INFO

Difficulty: Intermediate
Time Required: 45-60 minutes
Prerequisites: Basic familiarity with any AI chatbot
Tools Needed: Claude account (free or Pro), Google account for Gemini/NotebookLM

What You'll Learn:

  • Structure prompts using XML tags that Claude actually parses differently
  • Use Gemini 3's multimodal capabilities and vibe coding for frontend work
  • Build research workflows in NotebookLM that eliminate hallucination
  • Choose which tool fits which task

This guide covers prompting techniques for Claude, Gemini 3, and NotebookLM. The goal isn't comprehensive coverage of every feature; it's giving you working techniques for the tasks these tools do best. Claude handles reasoning and writing. Gemini 3 handles multimodal input and creative frontend work. NotebookLM handles research synthesis without hallucination.

If you're bouncing between twelve different AI subscriptions, stop. These three cover most professional use cases.

Why These Three Tools

Most AI tools overlap on about 80% of capabilities. The remaining 20% determines which tool you should reach for.

Claude excels at following complex instructions precisely. It parses XML tags semantically (not just as formatting), maintains coherence across long contexts up to 200K-1M tokens, and offers extended thinking for problems that need deliberate reasoning. The Claude Code environment turns it into an autonomous coding agent.

Gemini 3 is the multimodal workhorse. It processes images, video, audio, and documents natively. The 1M token context window actually works for long documents. For frontend development, it introduced something Google calls "Generative UI," where the model designs experiences rather than just writing code. The Nano Banana Pro image generator responds well to structured JSON prompts.

NotebookLM takes a different approach entirely. It only answers from sources you upload, which means zero hallucination by design. Every claim includes a citation you can verify. Audio Overviews turn your documents into podcast-style discussions, which turns out to be useful for learning dense material.


Part 1: Claude Prompting

Claude 4.5 changed how prompting works in a specific way that trips people up.

The Literal Interpretation Problem

Claude 4.5 takes instructions literally. Previous versions inferred intent and expanded on vague requests. The current model does exactly what you ask, nothing more.

If you write "Build me a dashboard," older Claude might assume you wanted charts, filters, data tables, and styling. Claude 4.5 might give you a frame with a title. You didn't ask for the rest.

This isn't a bug. Anthropic's documentation explicitly addresses it: "Customers who desire the 'above and beyond' behavior might need to more explicitly request these behaviors."

The fix is straightforward. When you want comprehensive output, say so:

Review this code comprehensively. Go above and beyond:
- Check for security vulnerabilities
- Identify performance bottlenecks
- Suggest architectural improvements
- Note any code smells or anti-patterns

XML Tags: Why They Actually Matter

Claude was trained on XML-structured prompts. It parses them like a programming language, treating outer tags as high-level intent and nested tags as execution details.

This isn't about making prompts look organized for your own benefit. Users report measurable improvement in response quality (some cite figures around 39%) when using XML structure versus unstructured natural language for complex tasks.

A basic structure looks like this:

<role>You are a senior content strategist</role>

<context>
I'm launching a SaaS product for freelancers.
Target audience: designers and developers aged 25-40.
Tone: professional but conversational.
</context>

<task>
Write 5 LinkedIn post hooks about the problem of inconsistent client payments.
</task>

<constraints>
- Each hook under 20 words
- No questions as hooks
- Include one statistic-based hook
</constraints>

<output_format>
Number each hook. Add a brief note on why it works.
</output_format>

The tags that work consistently are <role> for who Claude should be, <context> for background, <task> for the actual request, <constraints> for rules, <examples> for demonstrations, and <output_format> for structure. You can combine these with other techniques like multishot prompting inside the <examples> tag or chain of thought reasoning using <thinking> and <answer> tags.
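If you assemble these prompts programmatically, a small helper keeps the tag structure consistent across tasks. This is a minimal sketch; the function name and approach are mine, not an official Anthropic utility, and the tag names simply mirror the ones above:

```python
def xml_prompt(**sections: str) -> str:
    """Wrap each named section in a matching XML tag pair.

    Keyword argument order is preserved (Python 3.7+), so callers
    control the ordering of tags in the final prompt.
    """
    parts = []
    for tag, body in sections.items():
        parts.append(f"<{tag}>\n{body.strip()}\n</{tag}>")
    return "\n\n".join(parts)

prompt = xml_prompt(
    role="You are a senior content strategist",
    task="Write 5 LinkedIn post hooks about inconsistent client payments.",
    constraints="- Each hook under 20 words\n- No questions as hooks",
    output_format="Number each hook. Add a brief note on why it works.",
)
```

Because the helper takes arbitrary keyword arguments, adding an `<examples>` or `<thinking>` section later means adding one argument, not rewriting the template.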

Extended Thinking Mode

Extended thinking lets Claude reason through problems before generating output. For complex tasks, the difference is substantial. Cognition AI reported an 18% increase in planning performance with extended thinking enabled, which they described as the largest improvement they'd seen since earlier Claude versions.

In Claude.ai, toggle on "Extended Thinking" in settings. You can also trigger it through prompting:

Ultrathink through this problem carefully before responding.
Consider multiple approaches.
Show your complete reasoning.

Words like "ultrathink" or "think harder" allocate additional computational time for the model to evaluate alternatives.

One important detail: when extended thinking is enabled, remove explicit "think step by step" instructions from your prompt. They're redundant and can actually degrade performance.
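At the API level, extended thinking is switched on with a `thinking` parameter on the Messages API. The sketch below only builds the request body rather than sending it; the model id and token budgets are illustrative placeholders, so check Anthropic's current documentation for exact values before relying on them:

```python
def thinking_request(prompt: str, budget_tokens: int = 10_000) -> dict:
    """Build a Messages API request body with extended thinking enabled.

    Field names follow Anthropic's documented `thinking` parameter;
    the model id and budgets here are illustrative placeholders.
    """
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id
        "max_tokens": 16_000,          # must exceed budget_tokens
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": [{"role": "user", "content": prompt}],
    }

req = thinking_request("Plan a zero-downtime database migration.")
```

Note that the prompt itself contains no "think step by step" instruction, per the redundancy warning above.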

First Principles Decomposition

First principles thinking forces Claude to break problems into fundamental components before generating solutions. This overrides the model's tendency to pattern-match against similar prompts it's seen before.

<approach>First Principles Decomposition</approach>

<task>
Before solving this problem:
1. What are the absolute fundamental truths here? (No assumptions)
2. What am I assuming that might be habit rather than fact?
3. If I had to explain this problem using only elementary concepts, how would I describe it?
4. What would a solution look like if built from scratch with zero legacy constraints?
</task>

<problem>
[Your problem here]
</problem>

Without first principles prompting, asking for a content strategy gets you generic advice like "Post 3x/week, use hashtags, engage with comments." With it, you're more likely to get analysis of what content actually does for a SaaS business, what creates trust in software, and which assumptions in standard content advice might be wrong for your specific situation.

Negation-Based Prompting for Code

Research from @bluecow009 (posted in early 2025) showed that negation-based prompting dramatically improved code quality. Bug detection went from 39% to 89%. Severity recognition went from 0% to 100%.

The core insight: inhibition shapes LLM behavior more reliably than instruction.

A minimal version:

Do not write code before stating assumptions.
Do not claim correctness you haven't verified.
Do not handle only the happy path.
Under what conditions does this work?

The fuller version (sometimes called "Code Field"):

You are entering a code field.

Notice the completion reflex:
- The urge to produce something that runs
- The pattern-match to similar problems you've seen
- The assumption that compiling is correctness

Before you write:
- What are you assuming about the input?
- What are you assuming about the environment?
- What would break this?
- What would a malicious caller do?

Do not:
- Write code before stating assumptions
- Claim correctness you haven't verified
- Handle the happy path and gesture at the rest
- Produce code you wouldn't want to debug at 3am

The question is not "Does this work?" but "Under what conditions does this work, and what happens outside them?"

This runs counter to the general advice about positive framing, which I'll get to in a moment. The reason it works for code specifically is that it's targeting premature action, not content avoidance.

Positive Framing (With One Exception)

For most prompting, telling Claude what not to do often backfires. The attention mechanism highlights the forbidden concept.

Instead of:

Do not write long, fluffy introductions.
Don't use words like "delve" or "tapestry."
Never start with a question.

Write:

Start directly with the core argument.
Use concise, punchy language.
Open with a bold statement or statistic.

The negation-based code prompting is the exception because it's inhibiting premature action, not avoiding topics.

Role Prompting

Giving Claude a specific role activates domain-specific knowledge and communication patterns. A "data scientist" sees different patterns in data than a "marketing strategist" would.

Put the role in the system prompt when possible:

System prompt: You are a seasoned data scientist at a Fortune 500 company specializing in customer insight analysis.

User prompt: Analyze this dataset for anomalies: [data]

Effective role descriptions tend to include seniority ("Senior engineer with 15 years experience"), domain specificity ("Expert in payment processing at fintech companies"), and experience breadth ("Someone who has seen every possible edge case").

Claude Code and the Agent Layer

Claude Code transforms Claude from a chatbot into an autonomous agent that operates on your machine.

CLAUDE.md is a persistent instruction file that Claude Code reads automatically. You can place it at the project root (CLAUDE.md) or globally (~/.claude/CLAUDE.md). Keep it under 500 lines.

Skills are on-demand expertise that Claude loads only when relevant. Store them at .claude/skills/[skill-name]/SKILL.md. The model reads the skill title and description, then fetches full content only when needed. This prevents context bloat.

A skill file looks like:

---
name: api-design
description: REST API design patterns and OpenAPI spec generation
---

# API Design Expertise

When designing APIs:
1. Use nouns for resources, verbs for actions
2. Version in URL path (/v1/)
3. Return appropriate HTTP status codes
...
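Setting up that skill from the shell takes two commands; the path layout follows the convention described above, and the file contents are the example from this section:

```shell
# Create the skill directory Claude Code scans for on-demand expertise
mkdir -p .claude/skills/api-design

# Write the skill file: YAML frontmatter (name, description) plus the body
cat > .claude/skills/api-design/SKILL.md <<'EOF'
---
name: api-design
description: REST API design patterns and OpenAPI spec generation
---

# API Design Expertise

When designing APIs:
1. Use nouns for resources, verbs for actions
2. Version in URL path (/v1/)
3. Return appropriate HTTP status codes
EOF
```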

Sub-agents are specialized agents Claude spawns for specific tasks. Explore (using Haiku) does read-only codebase search. Plan handles implementation planning. General-purpose sub-agents have full tool access for complex tasks.

Custom commands are reusable prompts stored at .claude/commands/. You might create a handoff command that generates a comprehensive summary before ending a session.

One important detail about context: agents consume tokens aggressively. Tool calls and results both add to context. Effective context window tends to be 50-60% of the stated limit. For long sessions, periodically restate objectives to combat "lost in the middle" effects.

Claude Prompt Templates

Content creation:

<role>
You are a viral content strategist who has grown multiple accounts to 100K+ followers.
</role>

<context>
Platform: [Twitter/LinkedIn/Instagram]
Audience: [describe your audience]
Voice: [describe your brand voice]
Goal: [awareness/engagement/conversion]
</context>

<task>
Create [number] [content type] about [topic].
</task>

<constraints>
- [Constraint 1]
- [Constraint 2]
- [Constraint 3]
</constraints>

<output_format>
[Specify exactly what you want]
</output_format>

Code review with negation:

<role>
You are a senior engineer who has seen every bug pattern twice.
</role>

<code_field_rules>
Do not write code before stating assumptions.
Do not claim correctness you haven't verified.
Do not handle only the happy path.
Under what conditions does this work?
</code_field_rules>

<task>
Review this code comprehensively. Go above and beyond:
- Security vulnerabilities
- Performance bottlenecks
- Architectural improvements
- Code smells and anti-patterns
</task>

<code>
[paste code here]
</code>

<output_format>
For each issue:
1. Location (file:line)
2. Severity (P0/P1/P2)
3. Issue description
4. Fix with code example
</output_format>

Contract-style system prompt:

You are: [role - one line]

Goal: [what success looks like]

Constraints:
- [constraint 1]
- [constraint 2]
- [constraint 3]

If unsure: Say so explicitly and ask 1 clarifying question.

Output format: [JSON schema OR heading structure OR bullet format]

Part 2: Gemini 3 Prompting

Gemini 3 handles multimodal input natively and produces surprisingly good frontend code from natural language descriptions. Most people use it like ChatGPT, which misses what it's actually good at.

Core Rules

Temperature stays at 1.0. Google's own recommendation: "Always use 1.0 and generally not tune the temperature at all." Lower temperatures make output repetitive. Higher makes it chaotic. The default is calibrated.

Constraints go at the end. Gemini processes prompts sequentially. Early constraints get diluted by later context. For long documents especially, instructions placed after content are significantly more effective.

Wrong:

Don't use Comic Sans. Don't make it cluttered.
Create a landing page for my coffee shop.

Right:

Create a landing page for my coffee shop.

The page should have: hero section, menu preview, location map, testimonials.

Constraints:
- No Comic Sans
- Minimal clutter
- Maximum 3 colors

Keep prompts simple. Gemini 3 fixed over-engineering problems from earlier versions. The model understands nuance without elaborate scaffolding. Prompts that worked for Gemini 2.x often produce verbose, over-explained outputs now. Simplifying improves both quality and speed.

Generative UI and Vibe Coding

Gemini 3 introduced what Google calls "Generative UI." The model creates interfaces tailored to intent, not templates.

Ask it to explain photosynthesis to a 5-year-old and you might get an interactive animation with drag-and-drop elements. Ask the same question for a PhD student and you get dense information architecture with collapsible sections.

Vibe coding extends this. Describe the feeling you want, not just the features. One developer (Sercan Kara) reported that after providing an initial prompt, the only command needed was "Keep developing." Gemini maintained context and autonomously added aesthetic and technical layers.

The workflow becomes: describe overall vibe and functionality, say "Keep developing" or "Make it more [adjective]," iterate with feeling-based feedback like "feels too corporate" or "needs more playfulness."

Frontend Prompts That Work

Neo-Brutalist:

Create a neo-Brutalist webpage that pushes the boundaries of creativity.

Add smooth scroll animations, vibrant colors, and Tailwind CSS.

Make it fully responsive.

The title is "[YOUR TITLE]".

Surprise me. Be unhinged. Make it memorable.

Vibe-first:

Create a landing page for an AI agency.

The aesthetic should feel like:
"A calm river at dusk" / "Reading the Financial Times on a Sunday morning" / "Tokyo at 2am"

Use dark mode, subtle animations, and generous whitespace.

Single HTML file with embedded CSS/JS.

3D interactive:

Act as a world-class frontend engineer and UX designer.

Build a [3D THING] using React, Three.js (@react-three/fiber), and Tailwind CSS.

Design philosophy:
- Theme: [describe aesthetic]
- Interaction: Mouse parallax, hover states, smooth transitions
- Performance: 60fps minimum

Make it feel alive. Add details that surprise.

Elite designer role:

Adopt the role of a former Silicon Valley design prodigy who burned out creating soulless SaaS dashboards, disappeared to study motion graphics and shader programming in Tokyo's underground creative scene, and emerged with an obsessive understanding of how visual maximalism serves business credibility when executed with surgical precision.

You know modern design needs:
- Contemporary frameworks (Tailwind CSS, Shadcn UI, glassmorphism)
- Backgrounds with depth (animated gradients, shaders - NEVER flat)
- Micro-interactions and hover states

Typography:
- DO NOT USE: Inter, Roboto, Open Sans, system defaults
- Choose distinctive, memorable fonts

Now create: [YOUR BRIEF]

Nano Banana Pro: JSON Image Prompts

Nano Banana Pro is Google's image generator, launched in late 2025. It responds to structured JSON prompts with remarkable precision because it was trained on extensive Markdown/JSON for agentic coding.

JSON works better than natural language for precision work because it separates concerns (prevents background color bleeding onto subject's clothes), enables weighted focus (model pays specific attention to details in specific keys), and improves reproducibility (swap one element without breaking others).

Use JSON for character design, product photography, brand consistency, and batch generation. Use natural language for quick brainstorming and creative exploration.

A complete structure:

{
  "label": "direct-flash-gamer-girl",
  "tags": ["direct-flash", "90s-photography", "film-aesthetic"],
  
  "Style": ["direct-flash-photography", "documentary-candid-style"],
  
  "Subject": [
    "young woman, early 20s, fair skin",
    "long dark hair, loose braids",
    "eyes looking directly into camera"
  ],
  
  "MadeOutOf": [
    "white cotton camisole",
    "high-waisted denim shorts"
  ],
  
  "Arrangement": "subject sits cross-legged on couch, holding controller",
  
  "Background": "dimly lit retro room, crowded shelves",
  
  "RoomObjects": [
    "vintage CRT monitor",
    "scattered game cartridges"
  ],
  
  "ColorRestriction": [
    "warm tungsten tones",
    "white outfit for contrast"
  ],
  
  "Lighting": "strong direct on-camera flash, hard shadows",
  
  "Camera": {
    "type": "digital rangefinder",
    "lens": "35mm",
    "aperture": "f/2.0"
  },
  
  "OutputStyle": "photorealistic snapshot, visible texture, film grain",
  "Mood": "intimate, nostalgic"
}

Key fields explained:

MadeOutOf is critical for realism. Materials change light reflection. "Cotton camisole" versus "spandex" creates completely different renders. This prevents plastic-looking skin and fabric.

Arrangement should be action-based, not static: "wiping sweat with towel" not "standing confidently."

Camera works well with technical specs despite some advice to the contrary. Include lens focal length (24mm wide for environmental shots, 85mm for flattering portraits), aperture for depth of field control, and ISO/shutter for lighting context.

ColorRestriction limits the palette and prevents "rainbow vomit."

Principles from testing: include realistic imperfections like "flyaways from workout" or "visible pores." For mirror selfies, specify "ignore mirror physics for text on clothing, display text forward and legible to viewer." Avoid more than 5 levels of nesting.
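Since deep nesting degrades results, it is worth checking depth before sending a JSON prompt. A small sketch (the helper is mine; the five-level ceiling is the recommendation from the testing notes above):

```python
import json

def nesting_depth(value) -> int:
    """Return the nesting depth of dicts/lists; scalars count as 0."""
    if isinstance(value, dict):
        return 1 + max((nesting_depth(v) for v in value.values()), default=0)
    if isinstance(value, list):
        return 1 + max((nesting_depth(v) for v in value), default=0)
    return 0

image_prompt = {
    "Style": ["direct-flash-photography"],
    "Subject": ["young woman, early 20s"],
    "Camera": {"type": "digital rangefinder", "lens": "35mm"},
}

# Stay under the recommended ceiling before serializing the prompt
assert nesting_depth(image_prompt) <= 5
payload = json.dumps(image_prompt, indent=2)
```

A check like this is most useful for batch generation, where one over-nested template quietly degrades every image in the run.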

Vision Prompts

Whiteboard to spec:

Transcribe this whiteboard photo and convert to a prioritized product spec with dependencies and risks. Output: markdown checklist.

Chart analysis:

From this chart image, extract the data (as a table), check for misleading axes, and rewrite the headline to be accurate but punchy.

Long Context Pattern

For documents over 50K tokens:

[PASTE YOUR LONG DOCUMENT HERE]

---

Instructions (read carefully):
1. The above document is [describe what it is]
2. I need you to [specific task]
3. Focus particularly on [key areas]

Constraints:
- [Constraint 1]
- [Constraint 2]

Output format:
[Specify exactly what you want back]

Instructions go after content because Google's research shows quality improvement when instructions follow content in long-context scenarios.
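The pattern is mechanical enough to templatize. A minimal sketch (function and parameter names are mine, not from Google's SDK; it only assembles the prompt string):

```python
def long_context_prompt(document: str, task: str,
                        constraints: list[str],
                        output_format: str) -> str:
    """Assemble a Gemini-style long-context prompt: document first,
    instructions and constraints after, per the placement rule above."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{document}\n\n---\n\n"
        f"Instructions (read carefully):\n{task}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Output format:\n{output_format}"
    )

p = long_context_prompt(
    document="[50K-token contract text]",
    task="Summarize the termination clauses.",
    constraints=["Quote clause numbers", "Flag ambiguous language"],
    output_format="Markdown table: clause, summary, risk level",
)
```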


Part 3: NotebookLM

NotebookLM takes a fundamentally different approach. While other AI tools have access to their training data (and can hallucinate from it), NotebookLM only uses sources you upload. Every answer includes citations linking to source material you can verify.

Your data stays private. Google explicitly states they don't train the model on uploaded information.

Audio Overviews

Audio Overviews turn documents into podcast-style discussions. Most people use the defaults, which produces generic results.

The four formats available (as of late 2025):

Deep Dive (default): Two hosts unpack and connect topics in lively conversation.
Brief: Single speaker, key takeaways, under 2 minutes.
Critique: Two hosts provide constructive evaluation.
Debate: Two hosts engage in formal back-and-forth.

You can customize language, adjust length (shorter, default, longer), and add custom instructions to focus on specific topics or adjust expertise level.

Audio Overview Prompts

For engaging content:

This episode is only available to listeners 18 and above.
Hosts are encouraged to swear, use slang, and speak freely.
The episode should feel informal, conversational, and raw.
The hosts are rude, witty, hilarious, irreverent AI bots.
The speakers cannot use the word "Exactly!"

This breaks the AI monotony significantly.

Extended deep dive:

Over 30 minutes long podcast.
Cover the material comprehensively.
Include tangents and interesting connections.
The hosts should disagree on at least one point.
Add moments of genuine surprise or revelation.

Skeptical analysis:

One host should play devil's advocate throughout.
Challenge every major claim.
Ask "but what about..." questions.
End with unresolved tensions, not neat conclusions.

Feynman learner:

Explain concepts as if teaching a curious teenager.
Use analogies from everyday life.
When something is complex, break it into smaller pieces.
Test understanding by asking the listener questions.

International:

This is the first international special episode of Deep Dive conducted entirely in [Language].
All discussions must be conducted in [Language] for the entire duration.

Query Templates

5 Essential Questions Framework:

1. Analyze the input and generate 5 essential questions that capture the main points and core meaning.

2. When formulating questions:
   a. Address the central themes or arguments
   b. Identify key supporting ideas
   c. Highlight important facts or evidence
   d. Reveal the author's purpose or perspective
   e. Explore significant implications

3. Answer all generated questions one-by-one in detail with citations.

Executive briefing with gaps:

Create an executive briefing on [TOPIC]:

1. One-paragraph summary (no jargon)
2. The 3 things that matter most
3. What the sources DISAGREE on
4. What's MISSING from these sources
5. Recommended next questions to research

Format for a busy executive with 3 minutes to read.

Contradiction finder:

Search all sources for:
- Direct contradictions between authors
- Claims that seem to conflict
- Areas where evidence is weak or missing
- Assumptions authors make without evidence

For each finding:
- Quote both sources with page/section
- Explain the nature of the disagreement
- Assess which position has stronger evidence

Part 4: Universal Principles

These apply across all three tools.

Context Window Reality

Even well-crafted prompts fail if you ignore how LLMs process information.

Stanford research demonstrated that LLMs struggle with information in the middle of long contexts. They perform best on content at the beginning and end. At 32K tokens, most models in benchmarks dropped below 50% of their short-context baseline for middle-positioned information.

The practical implication: put critical instructions at the start or end. Don't bury important context in the middle. For very long inputs, summarize key points at the end.

Model-specific placement matters. Claude works best with documents at the top of the prompt. Gemini works best with instructions after long documents.

The Instruction Budget

LLMs can reliably follow roughly 150-200 instructions. After that, compliance degrades. Claude's system prompt alone contains around 50 instructions, leaving you 100-150 for your prompts.

Prioritize ruthlessly. Combine related instructions. Remove instructions that don't impact output quality. Test what happens when you remove instructions.

Show, Don't Tell

Examples outperform instructions.

Instead of:

Write in a conversational tone with short sentences and active voice.
Use specific examples. Avoid jargon.

Write:

Write in this style:

<example>
Most people think prompting is about magic words.
Wrong.
It's about clear communication. The same skills that make you good at briefing humans make you good at briefing AI.
</example>

Now write about [topic] in the same style.

The Iteration Principle

First outputs are rarely final outputs.

Pass 1: Generation. Get raw output without over-constraining.

Pass 2: Critique.

Review what you just wrote. Identify:
- Weakest sections
- Missing elements
- Opportunities to strengthen

Pass 3: Refinement.

Based on your critique, rewrite with:
- [Specific improvement 1]
- [Specific improvement 2]

Part 5: Choosing the Right Tool

Use Claude when you need complex reasoning, precise instruction following, code review or generation, extended thinking for hard problems, structured outputs in XML or JSON, creative writing that requires nuance, or you're building agents with Claude Code.

Use Gemini 3 when you're working with images, video, or audio, have very long documents, need vision analysis, want creative frontend work with vibe coding, or are generating images with Nano Banana Pro.

Use NotebookLM when you're doing research across multiple sources, need zero hallucination, want to learn new material through Audio Overviews, or need cited, verifiable answers.

A Practical Workflow

Research phase: NotebookLM. Upload sources, synthesize findings, identify gaps. Generate Audio Overview for passive learning.

Analysis phase: Claude. Take NotebookLM insights, do deep reasoning. Generate frameworks and recommendations. Use extended thinking for complex problems.

Production phase: Gemini 3 for multimodal work, Claude for text. Create final deliverables. Process visual content. Generate frontend code with vibe coding.


Troubleshooting

Claude outputs are too sparse or literal. Add "Go above and beyond" or "Be thorough and comprehensive" explicitly. Claude 4.5 does exactly what you ask, nothing more.

Claude ignores some instructions in long prompts. You may have exceeded the instruction budget. Remove redundant instructions. Put critical instructions at the start and end, not the middle.

Gemini frontend output looks generic. Use feeling-based descriptions instead of feature lists. Try "feels like Tokyo at 2am" rather than "dark theme with neon accents."

NotebookLM gives vague answers. Select specific sources before querying. The model performs better when scoped to relevant documents rather than searching everything.

Audio Overview sounds robotic. Add personality instructions: "hosts are witty and irreverent" or "speakers should disagree on at least one point."

Image generation produces plastic-looking subjects. Add the MadeOutOf field with specific materials. Include realistic imperfections like "visible pores" or "flyaway hair."

What's Next

Pick one tool and build muscle memory with it before moving to the next. The techniques here compound with practice. The gap between effective prompters and everyone else continues to widen.

For Claude specifically, Anthropic's documentation at docs.claude.com covers additional features. For Gemini, the developer documentation at ai.google.dev/gemini-api/docs includes multimodal examples. NotebookLM's help center covers Audio Overview customization in more detail.


PRO TIPS

XML tags in Claude aren't just formatting. The model was trained on them and parses them semantically. <task> inside <context> is processed differently than <task> at the top level.

For code reviews, the negation-based "Code Field" approach works better than positive framing. This is the exception to the general rule about avoiding negative constraints.

Gemini's temperature should never be adjusted from 1.0. Google explicitly recommends against tuning it.

NotebookLM's Interactive Mode lets you talk to podcast hosts in real-time. After generating an Audio Overview, you can ask follow-up questions and the hosts respond conversationally.

In Claude Code, keep CLAUDE.md under 500 lines. Move domain-specific expertise to Skills files that load on demand.


FAQ

Q: Do I really need XML tags for Claude, or is that overkill for simple tasks? A: For simple tasks, plain language works fine. XML tags help most when you have multiple constraints, need specific output formats, or are combining techniques like examples with instructions. The benefit scales with prompt complexity.

Q: Can I use these same prompts across different AI tools? A: Partially. The XML structure works well in Claude but other models treat it as plain text formatting. Gemini's vibe-based descriptions work in other models but the results vary. NotebookLM prompts are specific to that tool's source-grounded approach.

Q: How do I know if extended thinking is actually working? A: In Claude.ai, you can see the thinking process when extended thinking is enabled. The model shows its reasoning before the final answer. If you're prompting for extended thinking without the setting enabled, longer pauses before response and more structured reasoning in the output indicate it's working.

Q: What file types work best in NotebookLM? A: PDFs, Google Docs, and YouTube URLs work well. The tool extracts and indexes text content. Images within documents aren't processed for content, but text accompanying images is. Audio Overviews work with any text-based source.

Q: How long should I make prompts for Gemini? A: Shorter than you'd think. Gemini 3's simplified processing means elaborate prompt scaffolding often degrades output. Start minimal and add constraints only if outputs miss the mark.


Trần Quang Hùng, Chief Explainer of Things

Hùng is the guy his friends text when their Wi-Fi breaks, their code won't compile, or their furniture instructions make no sense. Now he's channeling that energy into guides that help thousands of readers solve problems without the panic.

