# AI Response

The **AI Response Node** enables your workflow to generate intelligent, context-aware text outputs directly from an AI model connected through your chosen AI provider.\
You can use it to generate summaries, recommendations, explanations, or any text-based responses using real-time workflow data.

### **Tabs Overview**

**1. Input Params**

This is where you configure the core input parameters for the AI Response node.

* **Connector:**\
  Select the connected AI provider (e.g., OpenAI, GoogleAI, Anthropic, or Vertex). This defines which model and API will be used for generating responses.
* **Model Name:**\
  Choose or specify the model you want to use, such as `gpt-4o-mini` or any other available under your selected provider.
* **Assistant ID:**\
  If you've connected an Assistant account, the field where you would normally enter a model name instead offers an option to add the assistant ID.
* **Message:**\
  Write the prompt or query that will be sent to the AI model.\
  You can also **use dynamic tokens** (for example: `{{.LoopBlock.output[1].ruleId}}`) to reference workflow data dynamically.\
  Learn more about using tokens here: [Use Tokens in the Editor](https://docs.nected.ai/nected-docs/references/pre-configured-tokens/use-tokens-in-the-editor)
* **System Prompt (Optional):**\
  Define the behavior or role of the AI model. For example:\
  `You are a helpful AI assistant. Provide accurate, concise, and helpful responses.`
* **Additional Options:**
  * **Add Confidence:** Include a confidence score for AI outputs.
  * **Format Markdown:** Toggle this on to receive formatted markdown responses.
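Conceptually, the fields above combine into a single request to the provider. The sketch below is illustrative only, assuming an OpenAI-style chat-completion payload; the field names on the Nected side and the pre-resolved token value are assumptions, not the actual wire format:

```python
# Illustrative sketch: how the node's inputs might map onto an
# OpenAI-style chat-completion request. Not Nected's actual payload.

def build_request(model, message, system_prompt=None):
    """Assemble a chat-completion payload from the node's inputs."""
    messages = []
    if system_prompt:
        # The optional System Prompt defines the model's role/behavior.
        messages.append({"role": "system", "content": system_prompt})
    # The Message field carries the user prompt; dynamic tokens such as
    # {{.LoopBlock.output[1].ruleId}} would already be resolved by the
    # workflow engine before this point (value below is hypothetical).
    messages.append({"role": "user", "content": message})
    return {"model": model, "messages": messages}

payload = build_request(
    model="gpt-4o-mini",
    message="Explain rule R-42 in one sentence.",
    system_prompt="You are a helpful AI assistant. Provide accurate, "
                  "concise, and helpful responses.",
)
```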

<figure><img src="https://4290782554-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FLg716fCfV8IUwXQygkTG%2Fuploads%2FX1ZQKTSmxqz79sJYzQ7B%2Fimage.png?alt=media&#x26;token=b5f8bd11-b135-4e51-aebd-f3804c67b380" alt=""><figcaption></figcaption></figure>

**2. Test Results**

This tab displays the model’s output when you test the node.\
The response is shown in **JSON format**, containing details like:

* Processing time
* Reference ID
* The actual AI-generated response

You can view it in **Raw**, **Pretty**, or **Table** formats for better readability.
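As an illustration of the shape described above, the snippet below parses a hypothetical test result. The field names and values are assumptions for the example; the actual keys in Nected's output may differ:

```python
import json

# Hypothetical Test Results payload: the key names here are
# illustrative, not Nected's exact output schema.
raw = """
{
  "processingTime": "120ms",
  "referenceId": "wf-8f3a2c",
  "response": "The customer qualifies for the premium tier."
}
"""

result = json.loads(raw)

# Downstream workflow nodes would typically read the generated text:
answer = result["response"]
```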

<figure><img src="https://4290782554-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FLg716fCfV8IUwXQygkTG%2Fuploads%2F7tXpQjfzWEeLeiibUrj3%2Fimage.png?alt=media&#x26;token=0d596097-8d90-4aaf-a526-fd0623ca2e66" alt=""><figcaption></figcaption></figure>

**3. Settings**

Use this tab to configure additional behavior for the connector and AI model.

* **Timeout for API (s):** Maximum time allowed for API response.
* **Timeout for Webhook/Cron (s):** Time limit for webhook or scheduled executions.
* **Continue on Error:** Decide whether the workflow continues if the node fails.
* **Max Tokens:** Set the maximum length of the AI response, measured in tokens (word fragments, roughly three-quarters of a word each on average, rather than exact characters or words).
* **Temperature:** Controls randomness—lower values produce consistent results, higher values produce creative outputs.
* **Max Retries:** Number of retry attempts if the model fails to respond.
* **Metadata Required:** Toggle on if additional metadata is needed.
* **Cache:** Enable caching to reuse previously generated results.
* **Time to Expire:** Define how long the cached results remain valid.
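Of these settings, **Temperature** is the least intuitive. A minimal sketch of the general idea behind it, temperature-scaled sampling over the model's next-token probabilities (not Nected's or any provider's implementation):

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax: lower temperature sharpens the
    distribution toward the top choice; higher temperature flattens
    it toward uniform, making sampled outputs more varied."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.0, 0.1]
cold = softmax(logits, temperature=0.2)  # near-deterministic
hot = softmax(logits, temperature=2.0)   # closer to uniform
```

At low temperature nearly all probability mass lands on the top candidate, so repeated runs give consistent results; at high temperature the alternatives become much more likely to be sampled, so outputs vary more.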

<figure><img src="https://4290782554-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FLg716fCfV8IUwXQygkTG%2Fuploads%2FroBs53z1HXGdWeymkaqN%2Fimage.png?alt=media&#x26;token=1a375ebe-d381-455b-8c0a-b549b1b93fd7" alt=""><figcaption></figcaption></figure>
