More on Prompts

Full Specification of Prompt Sections

Prompts are characterized by unique names that start with prompt., such as [prompt.generate_joke] or [prompt.write_recipe].

Each prompt section can have the following attributes:

Core Attributes

  • model_name: (Optional) Dictates the AI model used for the prompt. In its absence, the value from the [default] section or the built-in default model (identity) is chosen.

  • description: (Optional) A brief overview of new user input fields used in the prompt. For every input field, provide a description in the first prompt where it appears. The format is:

    description.{input_field} = Description text | input_data_type

    Specifying the data type is optional, with the default being str. Use the | delimiter only when specifying the data type. Typically, an ask section should be used for user input unless you need an attribute supported only by prompt sections (currently continue_if and break).

  • display: (Optional, defaults to True) If set to True, the model's response for that specific prompt is shown to the user. Otherwise, it remains hidden.

  • display_option: (Optional) Denotes the mode of response display (e.g., "text", "image", "video", "markdown", "audio", etc.). In its absence, "text" is chosen.

  • output_type: (Optional) Stipulates how the response is typecast (e.g., "json", "csv", "ini", etc.). Structured output is one of the model's most powerful capabilities. output_type should not be specified unless you are typecasting, i.e., treating the raw LLM output as JSON, INI, or another structured format.

  • continue_if: (Optional, Boolean) A condition that controls whether the workflow continues with this prompt. If it evaluates to True, the workflow advances to the next step.

  • break: (Optional, Boolean) A condition under which the workflow should halt. If it evaluates to True, the workflow stops at this point.

  • change_page: (Optional, Boolean) If set to True, a new page is started in the app. Typically, a next_page section should be used for changing the page unless you need an attribute supported only by prompt sections (currently continue_if and break).
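
Putting several of these attributes together, a single section might look like the sketch below. The section and field names here are hypothetical, and the continue_if expression follows the syntax shown in Example 3:

[prompt.summarize_feedback]
continue_if = {{input.want_summary}}
model_name = gpt-3.5-turbo
description.feedback_text = Paste the feedback you would like summarized: | str
message = Summarize the following feedback as JSON with keys "sentiment" and "summary": {{input.feedback_text}}
output_type = json
display = True
display_option = text

Here output_type = json is appropriate because the raw model output is being treated as structured JSON rather than plain text.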

Model-Specific Attributes

Models can introduce their own attributes. For instance, chat models recognize attributes such as system_role, message, and priming.
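
As a hedged sketch (attribute semantics vary by model, and the section name is hypothetical), a chat prompt that sets system_role alongside message might look like:

[prompt.pirate_greeting]
model_name = gpt-3.5-turbo
system_role = You are a helpful assistant who answers in the voice of a pirate.
message = Greet the user in one sentence.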

Example 1

An example prompt section that uses a user input called topic and sends a message to gpt-3.5-turbo to generate a hilarious joke on that topic.

[ask.topic]
description = Enter a topic which you find hilarious.

[prompt.joke_generator]
model_name = gpt-3.5-turbo
message = Tell me a hilarious joke about {{input.topic}}.

Example 2

An example of a prompt section that uses two user inputs, dish_name and number_of_people, and sends a message to gpt-4 to generate a recipe based on those inputs.

[ask.dish_name]
description = Please enter the dish you would like to cook:

[ask.number_of_people]
description = Please enter the number of people you would like to cook for:

[prompt.detailed_recipe]
model_name = gpt-4
display = True
display_option = text
message = Provide a detailed recipe for making {{input.dish_name}} that serves {{input.number_of_people}} people.

Example 3

An example of a prompt section that takes a user input genre and sends a message to StabilityAI's stable-diffusion-xl-beta-v2-2-2 to create an image of the Taj Mahal in that genre.

[prompt.generate_image_2]
continue_if = not {{input.want_second_image_of_tajmahal}}
model_name = stable-diffusion-xl-beta-v2-2-2
description.genre_2 = Provide an image genre (for example, impressionist) that you would like to use:
message = Taj Mahal image in genre: {{input.genre_2}}
display_option = image

In this section, collecting the user input genre_2 is conditional on whether the user wants the second image. In such cases, an inline user input inside a prompt section is needed, since ask sections do not currently support conditionals (this will change soon).
