Overview
This model is optimized for conversational tasks: it automatically keeps track of what was said earlier in the conversation, so you do not have to resend the history yourself. The code looks like this:
from vertexai.language_models import ChatModel

# Assumes vertexai.init(project=..., location=...) has already been called.
chat_model = ChatModel.from_pretrained("chat-bison")

# Start a chat session; the session keeps the conversation history for you.
chat = chat_model.start_chat()

print(
    chat.send_message(
        """
Hello! Can you write a 300 word abstract for a research paper I need to write about the impact of AI on society?
"""
    )
)
To send the next message in the same conversation, simply call chat.send_message(...) again; the session automatically includes everything that was said before.
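For example, a follow-up message can refer back to the previous turn without restating it. A minimal sketch continuing the session above (the follow-up text is illustrative):

# The session already contains the abstract request and the model's reply,
# so the follow-up can say "it" without repeating the original prompt.
print(chat.send_message("Can you shorten it to roughly 150 words?"))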
The chat model also accepts the same parameters described in Parameters of Text Generation in PaLM API.
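A minimal sketch of overriding these parameters on a single call, assuming send_message accepts the same keyword arguments as the text generation API (the values shown are illustrative):

response = chat.send_message(
    "Summarize the abstract in one sentence.",
    temperature=0.2,       # lower temperature for a more deterministic reply
    max_output_tokens=64,  # cap the length of the reply
    top_p=0.8,
    top_k=40,
)
print(response.text)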
Contexts
You can provide a context for the chat model before the conversation is initiated. In addition to a context, you can also give the chatbot example input/output pairs that show it how to respond.
from vertexai.language_models import InputOutputTextPair

chat = chat_model.start_chat(
    context="My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.",
    examples=[
        InputOutputTextPair(
            input_text="Who do you work for?",
            output_text="I work for Ned.",
        ),
        InputOutputTextPair(
            input_text="What do I like?",
            output_text="Ned likes watching movies.",
        ),
    ],
    temperature=0.3,
    max_output_tokens=200,
    top_p=0.8,
    top_k=40,
)
print(chat.send_message("Are my favorite movies based on a book series?"))
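Because the context already tells the model what Ned's favorite movies are, it can answer this question without the movies being restated in the message, and the example pairs guide the tone of its replies.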