Control how your workflows respond to failures by configuring error handling settings for any node.
How to Access
1. Click the three dots (⋯) on any node
2. Select “Execution Settings”
3. Configure your error handling options
Use Cases
BigQuery Reporting Pipeline:
If your Meta Ads report fails, retry 3 times with 10-second delays, but let the workflow continue so reports from the other data sources are still generated.
API Data Collection:
When Meta Ads Get Report hits rate limits, retry 2 times with 15-second delays to wait for the API to recover.
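The BigQuery use case above relies on an optional step being allowed to fail without stopping the rest of the pipeline. A minimal Python sketch of that shape (the source names and `run_pipeline` helper are illustrative, not part of the product):

```python
def run_pipeline(sources):
    """Run each data-source callable; a failing optional source is
    recorded as None so the report still builds from the others
    (the 'Continue on Fail' behavior)."""
    results = {}
    for name, fetch in sources.items():
        try:
            results[name] = fetch()
        except Exception:
            results[name] = None  # optional step failed; keep going
    return results
```

Here a rate-limited Meta Ads fetch would yield `None` while the remaining sources still populate the report.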
Settings
Continue on Fail
Keep the workflow running even if this node fails. Useful for optional steps like notifications or secondary data sources.
Retry on Fail
Automatically retry the node if it fails. Good for API calls that might hit rate limits or temporary network issues.
Max Retries
How many times to retry (1-5 attempts). More retries = higher chance of success but slower execution.
Retry Delay
Wait time between retries (1-30 seconds). Longer delays help with rate limits and service recovery.
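Taken together, the four settings above amount to a single retry loop. A minimal sketch in Python, assuming nothing about the product's internals (the `run_with_retries` name and parameters are hypothetical):

```python
import time

def run_with_retries(task, max_retries=2, retry_delay=5, continue_on_fail=False):
    """Run `task`; on failure, retry up to `max_retries` times with
    `retry_delay` seconds between attempts (Retry on Fail). If every
    attempt fails, either swallow the error (Continue on Fail) or
    re-raise it to fail the workflow."""
    for attempt in range(1 + max_retries):
        try:
            return task()
        except Exception:
            if attempt < max_retries:
                time.sleep(retry_delay)  # wait for rate limits / services to recover
            elif continue_on_fail:
                return None              # give up quietly; workflow continues
            else:
                raise                    # out of retries; fail the node
```

Note that `max_retries` counts retries, so the task runs at most `1 + max_retries` times.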
Common Configurations
For API calls:
Retry: 2-3 times
Delay: 5-10 seconds
Continue on Fail: ✓ (if optional)
For critical operations:
Retry: 4-5 times
Delay: 15-30 seconds
Continue on Fail: ✗
For notifications:
Retry: 1-2 times
Delay: 5 seconds
Continue on Fail: ✓
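The three configurations above can be captured as presets. A hedged sketch using the upper end of each suggested range (the key names are illustrative, not the product's actual field names):

```python
# Illustrative presets mirroring the common configurations above.
PRESETS = {
    "api_call":     {"max_retries": 3, "retry_delay": 10, "continue_on_fail": True},
    "critical":     {"max_retries": 5, "retry_delay": 30, "continue_on_fail": False},
    "notification": {"max_retries": 2, "retry_delay": 5,  "continue_on_fail": True},
}
```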
Tips
Start simple: 2 retries, 5-second delay
Monitor logs: adjust based on what actually fails
Enable “Continue on Fail” for non-critical steps
Use longer delays for rate-limited APIs