Priming a model
Completion models like text-davinci-003 simply take a prompt and generate text from it. Chat completion models like gpt-4, on the other hand, take a chat history between a user and an assistant as input and generate a completion for that chat. PSL allows prompt engineers to specify this chat history via the priming attribute, which is supported by all of our chat completion models. Priming is optional and is specified in the following format:
Example 1
Here we prime gpt-4 to produce output in a specified format (INI in this case). Priming is especially useful for getting the model to follow a structured output format: a few example turns show the model exactly what the desired output should look like.
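To make the idea concrete, the sketch below builds the kind of chat history that a priming attribute expands into before it is sent to a chat completion model. This is an illustration, not PSL syntax: the message roles follow the common chat-completion convention (system/user/assistant), and the INI example content is invented for this sketch.

```python
def build_primed_history(user_message: str) -> list[dict]:
    """Prepend priming turns that demonstrate the desired INI output."""
    priming = [
        # A system turn states the output format up front.
        {"role": "system",
         "content": "Answer every question as an INI file."},
        # One worked user/assistant exchange primes the model
        # on the exact INI layout we expect back.
        {"role": "user",
         "content": "Describe the city of Paris."},
        {"role": "assistant",
         "content": "[city]\nname = Paris\ncountry = France"},
    ]
    # The real user message goes last, after the priming turns.
    return priming + [{"role": "user", "content": user_message}]


history = build_primed_history("Describe the city of Tokyo.")
print(len(history))        # → 4 (three priming turns + the user message)
print(history[0]["role"])  # → system
```

Because the assistant turn in the priming history already answers in INI, the model's completion for the final user message tends to follow the same structure.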