Chaining Sections

Use the response from one prompt as input to a subsequent prompt.

Prompts are specified in sections of the form [prompt.<prompt_section>]. The response from a previous prompt can be used as an argument in subsequent prompts with the {{response.prompt_name}} syntax.
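One way such placeholder substitution could work (a minimal illustrative sketch, not the tool's actual resolver) is a single regex pass over the prompt text, looking values up along the dotted path:

```python
import re

def resolve_placeholders(text, context):
    """Replace {{dotted.path}} markers with values from a nested dict.

    `context` maps top-level names (e.g. "input", "response") to nested
    dicts of previously collected values. Hypothetical helper for
    illustration only.
    """
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)

    return re.sub(r"\{\{([\w.]+)\}\}", lookup, text)

context = {"input": {"category": "US Basketball players"}}
print(resolve_placeholders("category: {{input.category}}", context))
# category: US Basketball players
```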

Example 1

[ask.category]
description = Please provide a category from which you would like the AI to choose a random personality. For example: US Basketball players

; Use priming to make GPT output the personality info as valid INI.
; The user input `category` is used in the message sent to GPT, which is an example of chaining.
[prompt.get_personality]
model_name = gpt-3.5-turbo
priming.0.user = Our users want to play Taboo against Wikipedia articles. They will have five questions to guess the personality from their Wikipedia articles.
    You will be provided with a category. Please provide a personality for that category. Make it interesting. For example, Lionel Messi for soccer is not very challenging for users.
    Provide your answer in the following `ini` format, which has exactly three fields: `name`, `reason`, `hint`. The reason should not be more than a couple of sentences, and the hint should not be a spoiler.
    [personality]
    name = <name>
    reason = <reason for picking the personality with the category as context>
    hint = <a hint to help the user guess the personality without spoiling the game>

    Say okay to continue
priming.0.assistant = Okay.
priming.1.user =
    category: historical records
priming.1.assistant = [personality]
    name = Tsutomu Yamaguchi
    reason = Tsutomu Yamaguchi is an interesting personality in the "historical records" category. He is known for surviving both the Hiroshima and Nagasaki atomic bombings during World War II, making him a unique historical figure.
    hint = The personality also fits in the following categories: "World War II", "Japanese personalities"
message =
    category: {{input.category}}
output_type = ini
display = False

; Use the personality name to fetch their Wikipedia article for QnA.
; Using the previous response like this is chaining.
[prompt.chain]
model_name = langchain_document_loader
document_type = wikipedia
chain_type = ConversationalRetrievalChain
query = {{response.get_personality.personality.name}}
display = False

Here the response of the first prompt section [prompt.get_personality] is used in the prompt section [prompt.chain], in the query attribute that our langchain_document_loader model supports. Note also that response.get_personality is not simply a string but an objectified version of the INI (the output_type of the previous prompt is ini), which is why the nested path personality.name can be accessed.
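To illustrate what an objectified INI response could look like, here is a sketch using Python's standard configparser module (the tool's internal representation may differ):

```python
import configparser

# Example raw model output produced with output_type = ini.
raw_response = """\
[personality]
name = Tsutomu Yamaguchi
reason = Survived both atomic bombings during World War II.
hint = Also fits the categories "World War II" and "Japanese personalities"
"""

parser = configparser.ConfigParser()
parser.read_string(raw_response)

# With output_type = ini, the response is addressable by section and key,
# which is what {{response.get_personality.personality.name}} relies on.
name = parser["personality"]["name"]
print(name)  # Tsutomu Yamaguchi
```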

Example 2

Continuing the previous example, one can use the response from [prompt.chain] in a subsequent prompt like this:

[ask.guess_1]
description = Please provide your first question (or guess) for the personality.

[prompt.guess_1]
model_name = identity
message = {{response.chain.ask(input.guess_1)}}
display = False
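The {{response.chain.ask(input.guess_1)}} placeholder goes one step further than simple value lookup: it invokes a method on the stored response object. A toy resolver for just this pattern might look like the following (the class and function names are hypothetical, not the actual implementation):

```python
import re

class ChainResponse:
    """Stand-in for a retrieval chain's response object (hypothetical)."""

    def ask(self, question):
        # A real chain would query the loaded Wikipedia article here.
        return f"Answer to: {question}"

def resolve_call(text, responses, inputs):
    # Handles only the {{response.<name>.ask(input.<name>)}} pattern.
    pattern = r"\{\{response\.(\w+)\.ask\(input\.(\w+)\)\}\}"

    def repl(match):
        return responses[match.group(1)].ask(inputs[match.group(2)])

    return re.sub(pattern, repl, text)

out = resolve_call(
    "{{response.chain.ask(input.guess_1)}}",
    {"chain": ChainResponse()},
    {"guess_1": "Is the personality an athlete?"},
)
print(out)  # Answer to: Is the personality an athlete?
```

The identity model then simply passes the resolved message through as its response, so each guess's answer becomes available for further chaining.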
