AI Providers
Modern applications often rely on large‑language models to augment decision tables and automate workflows. Nected lets you connect several third‑party AI services so you can call their APIs directly from within your rules. This guide describes how to add and configure the built‑in AI connectors available under Integrations → AI Providers. It follows the same pattern used in other integration guides and builds on the concepts of environment types and publishing that apply across the platform.
Available AI connectors
The AI providers screen displays a card for each supported model provider. At the time of writing the following connectors are available:
Financial Advisor asst
Pre‑built assistant tuned for financial analysis.
Personal
General conversation helper using a default model.
OpenAI
Access OpenAI models such as GPT‑3.5 and GPT‑4.
GoogleAI
Use Google’s Gemini family of models via the Gemini API.
Anthropic
Call Claude models using Anthropic’s API.
Vertex
Integrate with models hosted on Google Vertex AI.
SageMaker
Invoke endpoints deployed on Amazon SageMaker.
Each card indicates whether it is already connected (green “Connected” badge) or ready for configuration (blue “+ Add” button). You may attach multiple providers simultaneously and choose the appropriate one when defining a rule. For built‑in assistants (Financial Advisor asst and Personal) Nected manages the backend, so you only need to name and publish the integration. For other providers you must supply API credentials obtained from the respective platform.
Prerequisites
Before you add any AI connector, make sure you have:
Access to a Nected workspace with Integrations permissions.
An account on the chosen AI platform. This often involves signing up, confirming your email and accepting the platform’s terms. For example, Anthropic requires a new user to register at console.anthropic.com, verify their email and describe the intended use case before generating an API key.
An API key or service account credentials:
OpenAI – generate an API key from the OpenAI dashboard.
GoogleAI – create a Gemini API key from Google AI Studio. Google explains that the Gemini API uses an encrypted string known as an API key and that keys can be created and managed from the Google AI Studio API Keys page. Keys are always associated with a Google Cloud project.
Anthropic – obtain a Claude API key from the Anthropic console after your account has been approved.
Vertex AI – download a JSON key file for a service account that has permission to call Vertex AI. You also need the Project ID, Endpoint ID and Region where your model is hosted.
SageMaker – note the Endpoint Name and Region of your deployed SageMaker endpoint. LangChain’s documentation notes that you must supply these values when calling a SageMaker endpoint and that the AWS client will load credentials from your configured profiles. You also need a valid AWS access key and secret access key for SageMaker. AWS recommends creating an IAM user rather than using root credentials, and warns that the secret access key can only be viewed at creation time. Save the secret key securely; you will need it when configuring the connector.
Understanding environment types is important. When you create any integration, Nected asks you to choose an Environment Type (Staging or Production) and provide a Name for the integration. The environment defines the context in which the connector will operate. Staging connectors cannot be used in production rules, so make sure you publish the same connector in both environments if required.
Adding an AI provider
The high‑level process for configuring any AI provider is the same:
Navigate to AI Providers. In your Nected workspace, open Integrations on the left navigation menu and select the AI Providers tab. The page lists all available AI connectors.
Select a provider. Click the card corresponding to the provider you want to connect. If the card shows a green Connected badge, it means this provider is already configured. Otherwise click the blue + Add button.
Specify environment and name. Choose Staging or Production for the Environment Type. Provide a short, descriptive name. Names must be unique across the workspace and cannot contain spaces.
Enter credentials and settings. The fields vary by provider. Details for each provider appear in the sections below.
Test the connection. After entering credentials, click Test Connection. Nected attempts to call the provider’s API using the supplied information and displays a success or failure message. Testing helps you confirm that the credentials are valid and that your network can reach the service.
Publish the integration. When the test succeeds, click Publish in Staging or Publish in Production. Publishing activates the connector in that environment. Unpublished connectors cannot be used in rules.
Verify status. After closing the configuration form, check the card’s status; it should display Connected if the connector is published and active.
Repeat these steps for as many providers as you need. With several connectors active, you can choose the appropriate model during rule configuration, which makes it easy to compare responses from different models.
OpenAI
The OpenAI connector lets you call GPT models (for example, gpt‑3.5‑turbo or gpt‑4). To configure it:
Generate an API key. Sign in to your OpenAI account and create a new secret key from the user settings page. Copy the key to a secure location; it will not be shown again.
Open the OpenAI card and click + Add.
Select the environment and enter a Name.
Provide the API key. Paste your OpenAI secret key into the API Key field. Optionally specify the Base URL if you use a custom proxy (the official endpoint is https://api.openai.com/v1). Choose the Model you want to use (e.g., gpt-3.5-turbo or gpt-4) from the dropdown.
Configure optional parameters. Some fields allow you to set Temperature, Top-p, Max Tokens or similar generation settings depending on the model.
Test and Publish. The test sends a simple prompt to OpenAI and verifies that a response is received. If it passes, publish the integration.
When the connector is active you can select it from the AI action in rules. To see how temperature affects responses, adjust the parameter here and compare the output.
GoogleAI (Gemini API)
Google’s Gemini API provides access to text‑ and multimodal models. To use it with Nected:
Create a Google Cloud project and enable the Gemini API. Google states that API keys are managed from the Google AI Studio API Keys page and that each key is associated with a Cloud project.
Generate an API key. In Google AI Studio, go to API Keys and click Create API Key. Note the key string; treat it as secret.
Open the GoogleAI card and click + Add.
Fill out the form:
Environment – select Staging or Production.
Name – choose a unique name.
Project ID – the ID of your Google Cloud project.
API Key – paste the Gemini API key you generated.
Model – choose from available models, such as gemini-1.0-pro (text) or gemini-pro-vision (multimodal). Depending on your quota, some models may not appear.
Test. Nected uses your key to make a small completion request. A successful response indicates that your key and project are valid.
Publish. Once published, the card will show Connected.
Because Google links keys to projects, you can rotate keys without changing the project ID. Keep in mind that sharing keys publicly is insecure; treat your Gemini key the same way you treat any other secret.
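For reference, a minimal sketch of a Gemini API call looks like the following. It uses the REST `generateContent` method with the key passed as a query parameter; the helper names and the prompt are illustrative assumptions, not Nected internals.

```python
import json
import urllib.request

GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_gemini_request(prompt: str) -> dict:
    """Wrap a text prompt in the generateContent request body shape."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

def generate_content(api_key: str, model: str, prompt: str) -> dict:
    """Call the Gemini generateContent endpoint and return the parsed JSON."""
    url = f"{GEMINI_BASE}/models/{model}:generateContent?key={api_key}"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_gemini_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

Because the key travels in the URL, avoid logging request URLs anywhere the key could leak.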
Anthropic (Claude)
Anthropic’s Claude models are accessible via an API that uses a private key. Follow these steps:
Register and obtain approval. Create an account at console.anthropic.com, verify your email and provide details about your intended use case. Account approval may take a few business days.
Generate an API key. After approval, log in to the Anthropic console and navigate to API Keys. Create a new key and copy it; Anthropic only displays the secret once.
Open the Anthropic card and click + Add.
Provide details:
Environment and Name.
API Key – paste your Claude key.
Model – choose from available Claude models (e.g., claude-3-haiku, claude-3-sonnet).
Max Tokens and Temperature – optional parameters controlling response length and randomness.
Test and Publish. Testing will send a short prompt to Claude. If successful, publish the integration.
Anthropic keys are tied to your organisation and may have usage limits. Make sure you abide by Anthropic’s terms when using the models.
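A minimal sketch of a Claude request, using Anthropic’s Messages API over plain HTTP, is shown below. The model ID and helper names are illustrative; note that the Messages API requires both a `max_tokens` value and an `anthropic-version` header.

```python
import json
import urllib.request

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"

def build_claude_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build a minimal Messages API body; max_tokens is required by Claude."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_claude(api_key: str, model: str, prompt: str) -> dict:
    """Send one user message to Claude and return the parsed JSON reply."""
    req = urllib.request.Request(
        ANTHROPIC_URL,
        data=json.dumps(build_claude_request(model, prompt)).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",  # required API version header
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```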
Vertex AI
Vertex AI integrates Google’s large‑language models through endpoints you host on Google Cloud. To connect Nected to a Vertex endpoint:
Prepare your project. In the Google Cloud console, enable the Vertex AI API and deploy your model as an online prediction endpoint. You will need the Project ID, Endpoint ID and Region.
Create a service account and JSON key. In the Cloud console, go to IAM & Admin → Service Accounts. Create (or select) a service account that has the Vertex AI User role. Under its Keys tab, choose Add Key → JSON to download a key file. Keep this JSON file in a secure location.
Open the Vertex card and click + Add.
Fill in the fields:
Environment and Name.
Project ID – the ID of your Google Cloud project.
Endpoint ID – the identifier of the deployed endpoint.
Region – the region where the endpoint resides (e.g., us-central1).
Service Account Key – upload the JSON key file you downloaded.
Model – choose the model served by the endpoint, if multiple are available.
Test. Nected will use your service account to call the endpoint. If authentication is successful and the endpoint responds, the test will pass.
Publish. Once published, the card shows Connected.
Should you ever update your model or redeploy the endpoint, remember to adjust the Endpoint ID and re‑test the connection.
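To make the Project ID / Endpoint ID / Region triple concrete, here is a rough sketch of the REST call behind an online prediction. The access token would come from the service-account JSON key (for example via the google-auth library); the helper names and the token-handling shortcut are assumptions for illustration.

```python
import json
import urllib.request

def vertex_predict_url(project_id: str, region: str, endpoint_id: str) -> str:
    """Build the regional REST URL for an online-prediction endpoint."""
    return (f"https://{region}-aiplatform.googleapis.com/v1/"
            f"projects/{project_id}/locations/{region}/"
            f"endpoints/{endpoint_id}:predict")

def call_vertex(project_id: str, region: str, endpoint_id: str,
                instances: list, access_token: str) -> dict:
    """POST instances to the endpoint; access_token is an OAuth bearer token
    obtained from the service-account key (e.g. with google-auth)."""
    req = urllib.request.Request(
        vertex_predict_url(project_id, region, endpoint_id),
        data=json.dumps({"instances": instances}).encode("utf-8"),
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

The region appears twice in the URL (hostname and path), which is why redeploying to a different region requires updating the connector.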
AWS SageMaker
SageMaker lets you host custom models behind an HTTP endpoint. To connect it with Nected:
Deploy your model on Amazon SageMaker and note the Endpoint Name and the Region. As the LangChain documentation explains, you must supply these two values to use a SageMaker endpoint. Endpoints are unique within each AWS region.
Create or identify an IAM user with permission to invoke the endpoint. AWS recommends using IAM users rather than root credentials and notes that secret access keys can only be viewed at creation time. Save the Access Key ID and Secret Access Key safely.
Open the SageMaker card and click + Add.
Enter configuration details:
Environment and Name.
Endpoint Name – the name of the SageMaker endpoint.
Region – the AWS region where the endpoint is deployed (for example, us-west-2).
Access Key ID and Secret Access Key – your IAM credentials.
Optional Profile Name – if you use named profiles in your AWS credentials file, specify the profile here.
Optional Content Type – the MIME type expected by your model (e.g., application/json).
Test. Nected will call the SageMaker endpoint using the provided credentials. Make sure the IAM user has the right policies to invoke the endpoint.
Publish. After a successful test, publish the integration to activate it.
If you change your AWS keys or update the endpoint, edit the connector and run the test again; credentials can be updated in place without recreating the connector, which makes regular key rotation straightforward.
Wrapping up
Connecting AI providers enables rich generative capabilities across your Nected rules and workflows. Although each provider has its own authentication requirements, the process within Nected remains consistent: choose a provider, pick an environment, supply the right credentials, test, and publish. By following this guide and keeping your API keys secure, you can start building AI‑powered logic with confidence.
What could you build once you have multiple models at your disposal? Perhaps an automated support agent that switches between providers depending on the query, or a decision table that calls a financial assistant only when the input contains monetary values. The flexibility of Nected’s AI integrations lets you experiment and innovate without leaving the platform.