Add a Custom Prompt
Prerequisites
Integrate a pre-built or custom LLM before creating a prompt. See LLM Integration.

Steps
- Go to Generative AI Tools > Prompts Library.
- Click + New Prompt (top right).
- Enter the Prompt Name, then select the Feature and Model.
- The Configuration section (endpoint URL, auth, headers) is auto-populated from the model integration and is read-only.
- In the Request section, create a prompt or import an existing one. To import an existing prompt:
- Click Import from Prompts and Requests Library.
- Select the Feature, Model, and Prompt. Hover and click Preview Prompt to review before importing. You can interchange prompts between features.
- Click Confirm to import the prompt into the JSON body.
- (Optional) Toggle Stream Response to enable streaming. Responses are sent incrementally in real time instead of waiting for the full response.
- Add "stream": true to the custom prompt when streaming is enabled. The saved prompt displays a "streaming" tag.
- Enabling streaming disables the "Exit Scenario" field. Streaming applies only to Agent Node and Prompt Node features using OpenAI and Azure OpenAI models.
- Fill in the Sample Context Values and click Test. If successful, the LLM response is displayed; otherwise an error appears.
- Map the response key: In the JSON response, double-click the key that holds the relevant information (e.g., content). The Platform generates a Response Path for that location. Click Save.
- Click Lookup Path to validate the path.

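As an illustration, in an OpenAI-style chat-completion response the relevant text typically sits several levels deep; the payload below is a hypothetical example, not output from the Platform:

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Paris is the capital of France."
      }
    }
  ]
}
```

Double-clicking content in a response shaped like this would generate a Response Path along the lines of choices[0].message.content; the exact path depends on your model's response structure.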
- Review the Actual Response and Expected Response:
- Green (match): Click Save. Skip to step 12.
- Red (mismatch): Click Configure to open the Post Processor Script editor.
- Enter the Post Processor Script and click Save & Test.
- Verify the result, then click Save. The responses turn green.
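A Post Processor Script reduces the raw LLM response to the value the Expected Response requires. As a rough sketch, assuming an OpenAI-style response shape (the function and field names here are illustrative, not the Platform's scripting API):

```javascript
// Hedged sketch: pull the assistant's text out of an OpenAI-style
// chat-completion response so it matches the Expected Response.
function postProcess(response) {
  // Navigate to the first choice's message content, if present.
  const content = response.choices?.[0]?.message?.content;
  // Return trimmed text, or an empty string when the path is missing.
  return typeof content === "string" ? content.trim() : "";
}

// Illustrative raw response:
const raw = {
  choices: [{ message: { role: "assistant", content: "  Hello!  " } }],
};
console.log(postProcess(raw)); // "Hello!"
```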
- (Optional) If Token Usage Limits are enabled for your custom model, map the token keys for accurate tracking:
- Request Tokens key: usage.input_tokens
- Response Tokens key: usage.output_tokens
Without this mapping, the Platform can't calculate token consumption, which may lead to untracked usage and unexpected costs.
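The keys above assume the model returns a usage object shaped roughly like the following (an Anthropic-style example; adjust the key paths to match your model's actual response):

```json
{
  "usage": {
    "input_tokens": 125,
    "output_tokens": 84
  }
}
```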
- Click Save. The prompt appears in the Prompts Library.