AI Response

The AI Response Node enables your workflow to generate intelligent, context-aware text outputs directly from an AI model connected through your chosen AI provider. You can use it to generate summaries, recommendations, explanations, or any text-based responses using real-time workflow data.

Tabs Overview

1. Input Params

This is where you configure the core input parameters for the AI Response Node.

  • Connector: Select the connected AI provider (e.g., OpenAI, GoogleAI, Anthropic, or Vertex). This defines which model and API will be used for generating responses.

  • Model Name: Choose or specify the model you want to use, such as gpt-4o-mini or any other available under your selected provider.

  • Assistant ID: If you've connected an Assistant account, the field where you would normally enter a model name instead gives you the option to enter the assistant ID.

  • Message: Write the prompt or query that will be sent to the AI model. You can also use dynamic tokens (for example: {{.LoopBlock.output[1].ruleId}}) to reference workflow data dynamically. Learn more about using tokens here: Use Tokens in the Editor

  • System Prompt (Optional): Define the behavior or role of the AI model. For example: You are a helpful AI assistant. Provide accurate, concise, and helpful responses.

  • Additional Options:

    • Add Confidence: Include a confidence score for AI outputs.

    • Format Markdown: Toggle this on to receive formatted markdown responses.
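Conceptually, the Input Params above map onto a chat-style request sent to the selected provider. The sketch below is illustrative only: the function name, payload fields, and token-resolution logic are assumptions for explanation, not the product's actual API.

```python
import re

def build_request(model, message, system_prompt=None, context=None):
    """Illustrative sketch: resolve {{.path}} tokens in the Message against
    workflow data, then assemble a provider-style chat payload."""
    def resolve(text):
        # Replace tokens like {{.LoopBlock.output[1].ruleId}} with values
        # looked up in the (hypothetical) workflow context dict.
        if not context:
            return text
        return re.sub(
            r"\{\{\.([\w.\[\]]+)\}\}",
            lambda m: str(context.get(m.group(1), m.group(0))),
            text,
        )

    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": resolve(message)})
    return {"model": model, "messages": messages}
```

For example, with a context of `{"Webhook.body": "order #42 shipped"}`, the message `"Summarize: {{.Webhook.body}}"` would be sent as `"Summarize: order #42 shipped"`.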

2. Test Results

This tab displays the model’s output when you test the node. The response is shown in JSON format, containing details like:

  • Processing time

  • Reference ID

  • The actual AI-generated response

You can view it in Raw, Pretty, or Table formats for better readability.
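As a rough illustration, the Raw view of a test result resembles the JSON below. The field names here are hypothetical placeholders for the details listed above; the exact keys in your workspace may differ.

```python
import json

# Hypothetical Test Results payload; actual key names may differ.
raw = """
{
  "processingTime": "1.2s",
  "referenceId": "ref-abc-123",
  "response": "Here is the generated summary..."
}
"""

result = json.loads(raw)
print(result["response"])
```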

3. Settings

Use this tab to configure additional behavior for the connector and AI model.

  • Timeout for API (s): Maximum time to wait for an API response.

  • Timeout for Webhook/Cron (s): Time limit for webhook or scheduled executions.

  • Continue on Error: Decide whether the workflow continues if the node fails.

  • Max Tokens: Set the maximum number of tokens (units of text, each roughly a few characters) in the AI response.

  • Temperature: Controls randomness. Lower values produce more consistent results; higher values produce more creative outputs.

  • Max Retries: Number of retry attempts if the model fails to respond.

  • Metadata Required: Toggle on if additional metadata is needed.

  • Cache: Enable caching to reuse previously generated results.

  • Time to Expire: Define how long the cached results remain valid.
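To make the interplay of Max Retries, Cache, and Time to Expire concrete, here is a minimal sketch of how such settings could behave. This is illustrative logic under assumed semantics, not the product's implementation; the function and variable names are hypothetical.

```python
import time

# Hypothetical in-memory cache: prompt -> (result, expiry timestamp)
_cache = {}

def call_with_settings(prompt, generate, max_retries=3, cache=True, ttl_s=300):
    """Illustrative sketch: reuse an unexpired cached result if caching is on,
    otherwise call `generate`, retrying up to `max_retries` extra times."""
    if cache:
        hit = _cache.get(prompt)
        if hit and hit[1] > time.time():
            return hit[0]  # cached result is still within its Time to Expire

    last_err = None
    for _attempt in range(max_retries + 1):
        try:
            result = generate(prompt)
            break
        except Exception as err:
            last_err = err
    else:
        raise last_err  # all retries exhausted

    if cache:
        _cache[prompt] = (result, time.time() + ttl_s)
    return result
```

With this model, a second identical call inside the expiry window returns the cached result without invoking the model again, while a transient failure is retried up to the configured limit.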
