Priming a model

Completion models like text-davinci-003 simply take a prompt and generate text from it. Chat completion models like gpt-4, on the other hand, take a chat history between a user and an assistant as input and generate a completion for that chat. PSL allows prompt engineers to specify this chat history via the priming attribute, which is supported by all of our chat completion models. Priming is optional and is specified in the following format:

priming.0.user = <First user input>
priming.0.assistant = <First assistant response>
priming.1.user = <Second user input>
priming.1.assistant = <Second assistant response>
...
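To make the mapping concrete, here is a minimal sketch of how flat `priming.N.role` keys could be turned into an ordered chat history. The `parse_priming` helper and the dict-based config are assumptions for illustration; PSL's actual internals may differ.

```python
def parse_priming(config: dict) -> list[dict]:
    """Turn flat priming.N.user / priming.N.assistant keys into an
    ordered chat history (hypothetical helper, not PSL's real API)."""
    messages = []
    index = 0
    while f"priming.{index}.user" in config:
        messages.append({"role": "user",
                         "content": config[f"priming.{index}.user"]})
        if f"priming.{index}.assistant" in config:
            messages.append({"role": "assistant",
                             "content": config[f"priming.{index}.assistant"]})
        index += 1
    return messages

config = {
    "priming.0.user": "Respond only in INI format. Say okay to continue.",
    "priming.0.assistant": "Okay.",
}
history = parse_priming(config)
# history now holds the primed turns, ready to be sent ahead of the
# real user message
```

The numeric index keeps the turns ordered, so each user entry is paired with the assistant reply that follows it.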

Example 1

Here we prime gpt-4 to output in a specified format (INI in this case). Priming can be very useful for getting the model to understand the output format when the desired output is structured.

[ask.title]
description = Please provide a title for your story book:

[prompt.generate_topic]
model_name = gpt-4
priming.0.user = I will provide you with the title of a story and you are required to provide a topic and imagery for that story.
    The format is INI with section name `story` and two fields `topic` and `imagery`.
    
    Say okay to continue.
priming.0.assistant = Okay.
priming.1.user =
    title: Snow White
priming.1.assistant = [story]
    topic = Snow white goes to the beach
    imagery = Snow White on a beach playing with friends inside a sand castle.
message =
    title: {{input.title}}
display = False
output_type = ini
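Under the assumption that the priming turns are sent ahead of the rendered `message` field, the chat history Example 1 produces might look like the following. The role/content structure follows the common chat-completions convention, and the final title ("The Three Little Pigs") is a made-up stand-in for whatever `{{input.title}}` resolves to; PSL's actual request wiring may differ.

```python
# Hypothetical chat history assembled from Example 1's priming plus the
# rendered message field. The last title is an illustrative placeholder
# for the user's {{input.title}} value.
messages = [
    {"role": "user", "content": (
        "I will provide you with the title of a story and you are required "
        "to provide a topic and imagery for that story.\n"
        "The format is INI with section name `story` and two fields "
        "`topic` and `imagery`.\n\n"
        "Say okay to continue.")},
    {"role": "assistant", "content": "Okay."},
    {"role": "user", "content": "title: Snow White"},
    {"role": "assistant", "content": (
        "[story]\n"
        "topic = Snow white goes to the beach\n"
        "imagery = Snow White on a beach playing with friends inside "
        "a sand castle.")},
    {"role": "user", "content": "title: The Three Little Pigs"},
]
```

Because the model sees a completed user/assistant exchange in the required INI shape, its completion for the final turn tends to follow that same `[story]` structure.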
