Techniques
Directional Stimulus Prompting

Li et al. (2023) propose Directional Stimulus Prompting, a new prompting technique to better guide the LLM toward generating the desired summary.

A small, tuneable policy LM is trained to generate the stimulus (i.e., the hint). This is part of a growing trend of using reinforcement learning (RL) to optimize LLMs.

The figure below compares Directional Stimulus Prompting with standard prompting. The policy LM can be small, and it is optimized to generate hints that guide a black-box, frozen LLM.

Figure: Directional Stimulus Prompting vs. standard prompting. Image source: Li et al. (2023)

Full example coming soon!
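In the meantime, the flow can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `generate_hints` stands in for the trained policy LM (here just a trivial keyword-frequency heuristic), and `build_dsp_prompt` injects the resulting stimulus into a standard summarization prompt before it is sent to the frozen LLM.

```python
import re
from collections import Counter

def generate_hints(article: str, num_hints: int = 4) -> list[str]:
    """Stand-in for the tuneable policy LM.

    In DSP this model is trained (e.g. with RL) to produce hint keywords;
    here we simply pick the most frequent longer words as a placeholder.
    """
    words = re.findall(r"[A-Za-z]{5,}", article.lower())
    return [word for word, _ in Counter(words).most_common(num_hints)]

def build_dsp_prompt(article: str, hints: list[str]) -> str:
    """Augment a standard summarization prompt with the directional
    stimulus (hint keywords) that guides the black-box, frozen LLM."""
    return (
        f"Article: {article}\n"
        f"Hint (keywords to cover): {'; '.join(hints)}\n"
        "Summarize the article in 2-3 sentences, covering the hint keywords."
    )

article = (
    "The researchers trained a small policy model with reinforcement "
    "learning to produce keyword hints that steer a frozen language "
    "model toward reference summaries."
)
hints = generate_hints(article)
prompt = build_dsp_prompt(article, hints)
# `prompt` would now be sent to the black-box LLM via its API.
```

The key design point is that only the small policy LM is updated during training; the large LLM stays frozen and is steered purely through the text of the prompt.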


Copyright © 2023 DAIR.AI