Database Node
Database nodes enable interaction with different databases, allowing for a range of operations including data retrieval, data insertion, updates, and execution of stored procedures. Each type of database node—MongoDB, PostgreSQL, MySQL, Redshift, and MS SQL Server—has unique settings to configure, reflecting the distinct features and capabilities of the respective database systems.
Within the workflow canvas, you would typically click the '+' or 'Add Node' button to reveal the list of nodes and select the type of database node you need—be it MongoDB, PostgreSQL, MySQL, Redshift, etc.
Choose the database node corresponding to the database you wish to interact with: MongoDB, PostgreSQL, MySQL, Redshift, REST API, or MS SQL Server.
Then, from the available options, choose the database you want to use as the node (for example, PostgreSQL). Click on that option and it will show the available datasets like this:
Now, you can either use one of the prebuilt datasets or create a new one.
If you want to use a prebuilt dataset, just click the dataset in the given list, and a new modal will open like this:
Inside this editor modal, you can write queries to customize the action. You can also edit the dataset by clicking on the pen🖊️ icon.
If you want to create a new dataset instead, you can do that from this same screen. Just click the dropdown button and then click "+Create Connector".
This will redirect you to the connectors page, where you can configure your connector. After configuring it, come back to this editor and click the refresh ↻ button; the connector will then appear in the dropdown list.
You can also create a new dataset connector directly inside the rule editor. If you don't want to go to the dataset editor, just click the "+Database_Name" button, which takes you to the connectors page with a popup for configuring that specific database with the proper credentials.
Next, configure the connection settings for your database, including host, port, username, password, and other required details.
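To make the connection settings concrete, here is a sketch of what they typically look like for a PostgreSQL connector. All of the values (hostname, database name, username) are hypothetical placeholders, not real credentials, and the exact fields your connector asks for may differ:

```python
# Hypothetical connection details for a PostgreSQL connector.
# Replace every value with your own database's details.
connection = {
    "host": "db.example.com",    # database server hostname (placeholder)
    "port": 5432,                # default PostgreSQL port
    "database": "orders_db",     # hypothetical database name
    "username": "workflow_user", # hypothetical user
    "password": "********",
}

# Many tools also accept these fields as a single DSN/connection string:
dsn = "postgresql://{username}:{password}@{host}:{port}/{database}".format(**connection)
print(dsn)  # postgresql://workflow_user:********@db.example.com:5432/orders_db
```

Whether you enter the fields individually or as a connection string depends on the connector form, but the same pieces of information are needed either way.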
After filling in all the connector details, click the Test Connection button to check whether the connector is working properly, i.e. the credentials are correct. Then publish the connection in staging first by clicking "Publish in Staging", and finally publish it in production by clicking the "Publish in Production" button.
Once you've successfully established a connection, come back to the workflow editor and click the refresh ↻ button; you'll now see the new connector. Click on the connector and the dataset editor will open again, where you can configure the database operations.
Once you've added the Database node, you can now configure the database node by following the below steps:
The database node has 3 tabs:
Input Params
Test Results
Settings
Inside the Input Params tab, you'll find several options to configure the database node.
Action Dropdown: There are two types of query actions that you can perform inside the database node:
SELECT Query: You can run a general select query to fetch one or many attributes from the database, for example to display one or more attributes once the previous nodes have executed successfully. This is written as a SQL query for relational databases like PostgreSQL and MySQL, or a BSON query for MongoDB.
INSERT or UPDATE Query: You can also insert new values into the database, or define the criteria for the records that need to be updated and specify the new values. Again, this will be a SQL or BSON query, depending on the database. Please read about supported and non-supported queries on the respective DB pages.
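To illustrate both query actions, here is a small, self-contained sketch. The node itself takes a raw SQL (or BSON) query, not Python; this example simply runs equivalent SELECT, INSERT, and UPDATE statements against an in-memory SQLite table, and the "orders" table and its columns are hypothetical:

```python
import sqlite3

# Illustrative only: an in-memory SQLite database standing in for your
# real connector. The "orders" table and its columns are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")

# INSERT: add a new record.
conn.execute("INSERT INTO orders (id, status) VALUES (101, 'pending')")

# UPDATE: define the criteria (id = 101) and specify the new value.
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 101")

# SELECT: fetch one or more attributes of the record.
rows = conn.execute("SELECT id, status FROM orders WHERE id = 101").fetchall()
print(rows)  # [(101, 'shipped')]
```

The statements inside the `execute` calls are what you would paste into the node's query editor for a relational database; for MongoDB the equivalent operations are expressed as BSON query documents instead.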
Integration Dropdown: In this option, you can select the database you intend to interact with from a list of preconfigured database integrations. You can also edit the database connector via the pen 🖊️ button just to the right of the dropdown.
Schema tab: Inside the schema section, you can access all the schemas from the database you're using. It gives you the database attributes as a schema, which you can use to build your queries.
Once you've configured your database, click the "Test" button; the query will be executed, and in the "Test Results" tab you'll see its output.
This helps you to observe the results or any errors that may occur, which will help in troubleshooting and validation.
Adjust additional configurations for how the database node will operate within your workflow.
Timeout Setting:
Set a duration after which the database operation should time out if not completed.
Continue on Error Setting:
If you turn this setting on, then even if the database node gets an error, the workflow will continue executing the next node, with the error passed along.
Let's take an example:
Let's say we have these 3 nodes in our workflow:
The first one is a Formula node, the second one is a Rule node and the third one is a database node.
In the past, our workflow system was designed with a stringent error management approach. If any node within the workflow encountered an error at any point during its execution, the entire process would come to a complete halt. This meant that if the first node, for example, "modified_cart," experienced any error, the workflow would not proceed to the next node. Instead, it would stop immediately at the first node, requiring immediate intervention to address the error before the workflow could continue. This approach prioritized error verification and strict control over processes, but it also meant increased interruptions and potential delays as errors were resolved one at a time.
Our current system introduces a feature called "Continue on error," which can be toggled on or off for each node in the workflow. When this setting is activated, it allows the workflow to proceed to the next node even if an error occurs in the current node. Specifically, if the first node, like "modified_cart," encounters an error while this setting is enabled, the workflow will not be halted. Instead, it will advance to the subsequent node, such as the rule node, with the error details passed along from the previous node. This functionality facilitates the continuation of the workflow despite errors, promoting greater flexibility and resilience in managing workflows by allowing for potential error handling at later stages of the process.
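The behavior described above can be sketched in a few lines. This is a conceptual illustration of "Continue on error" semantics, not the product's actual engine; the node names and the shape of the error payload are hypothetical:

```python
# Sketch of "Continue on error" semantics (illustrative, not the real engine).
def run_workflow(nodes, continue_on_error=True):
    """Run node functions in order; each receives the previous node's result."""
    result = None
    for name, fn in nodes:
        try:
            result = fn(result)
        except Exception as exc:
            if not continue_on_error:
                raise  # old behavior: the whole workflow halts here
            # new behavior: pass the error details along to the next node
            result = {"error": str(exc), "node": name}
    return result

nodes = [
    ("modified_cart", lambda _: 1 / 0),         # this node fails
    ("rule_node", lambda prev: {"saw": prev}),  # still runs, receives the error
]
print(run_workflow(nodes))
# {'saw': {'error': 'division by zero', 'node': 'modified_cart'}}
```

With the toggle off (`continue_on_error=False`), the first failing node re-raises and nothing downstream runs, which mirrors the strict halt-on-error behavior of the older system.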
Define the maximum number of rows to be processed during the database operation within the workflow. The default value is 1000, i.e. it can take up to 1,000 rows at a time.
In the settings tab, you can see the Caching option like this:
Once you toggle this option on, the node will use the cached data from the database. This means that even if the database is updated in the backend, the workflow will still use the cached version until the "Time to Expire" elapses. After that time, the node will start using fresh data from the original database.
The "Time to Expire" setting is a crucial parameter within the "Cache" configuration. It determines how long a cached value of a node remains valid before it needs refreshing. This setting is designed to optimize the use of cached data, enabling efficient data retrieval and minimizing redundant processing. The "Time to Expire" can be fine-tuned in units of Days, Hours, or even Minutes, providing flexibility and control over the cache management strategy. Properly configuring this setting ensures that the cache remains fresh and relevant, improving application performance.
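Conceptually, the caching behavior works like the following sketch: cached results are served until their age exceeds the configured expiry, after which the database is queried again. This is only an illustration of the idea, not the product's actual cache implementation:

```python
import time

# Conceptual sketch of a "Time to Expire" cache (illustrative only).
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, timestamp when cached)

    def get(self, key, fetch_fn):
        entry = self.store.get(key)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                  # still fresh: serve cached value
        value = fetch_fn()                   # expired or missing: hit the DB
        self.store[key] = (value, time.time())
        return value

cache = TTLCache(ttl_seconds=60)
calls = []
fetch = lambda: calls.append(1) or "db-result"  # stand-in for a real DB query

cache.get("query-1", fetch)  # first call queries the "database"
cache.get("query-1", fetch)  # within 60s: served from cache, no new query
print(len(calls))  # 1
```

Shorter expiry values keep data fresher at the cost of more database queries; longer values reduce load but risk serving stale results, which is the trade-off the setting lets you tune.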
Now, if you want to create a new database connector and integrate it into your workflow as a DB node, you can do that from this dropdown too. If you look carefully, you can see an option called "+Create Connector" at the end of the available database options. The steps are already mentioned in the previous section.