You can improve an LLM's accuracy by providing examples (shots) in the prompt. For example:

# Assuming the Vertex AI SDK; generation_model is a PaLM text model.
from vertexai.language_models import TextGenerationModel

generation_model = TextGenerationModel.from_pretrained("text-bison@001")

prompt = """Decide whether a Tweet's sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!
Sentiment: positive

Tweet: That was awful. Super boring 😠
Sentiment:
"""

print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)

The prompt above includes a single example, so it can be called a one-shot prompt.

Prompts are commonly categorized by how many examples (shots) they include:

  • Zero-shot prompt: No examples are given, only an instruction. This tends to produce more open-ended (creative) answers than the other types.
  • One-shot prompt: A single example is given. Typically more precise than a zero-shot prompt.
  • Few-shot prompt: Several examples are given. Suitable for more complex tasks (see the sketch after this list).
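
For comparison, here is a minimal sketch of zero-shot and few-shot versions of the same sentiment task, reusing the generation_model defined above. The model call, the extra "neutral" example, and the prompt wording are illustrative assumptions, not part of the original snippet.

# Zero-shot: instruction only, no examples. (Illustrative sketch.)
zero_shot_prompt = """Decide whether a Tweet's sentiment is positive, neutral, or negative.

Tweet: That was awful. Super boring 😠
Sentiment:
"""

# Few-shot: several labeled examples before the Tweet to classify.
# The "neutral" example below is invented for illustration.
few_shot_prompt = """Decide whether a Tweet's sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!
Sentiment: positive

Tweet: The service was fine, nothing special.
Sentiment: neutral

Tweet: That was awful. Super boring 😠
Sentiment:
"""

print(generation_model.predict(prompt=zero_shot_prompt, max_output_tokens=256).text)
print(generation_model.predict(prompt=few_shot_prompt, max_output_tokens=256).text)

The few-shot prompt constrains the output format more tightly, which is why it generally behaves more predictably on structured tasks like classification.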