Remove Duplicates takes a list and returns only the unique items, eliminating any repeated values. It is essential for cleaning data and ensuring each item is processed only once.

When to Use It

  • Clean campaign lists that may have duplicate entries
  • Remove repeated URLs from sitemap extraction
  • Deduplicate client lists from multiple sources
  • Ensure unique keywords before processing

Inputs

Field   Type   Required   Description
List    List   Yes        The list to remove duplicates from

Outputs

Output        Description
Unique List   List with duplicate items removed

Credit Cost

Free to use - no credits required.

Real-World Examples

Clean Campaign Data:
Google Ads Get Report → Remove Duplicates → Loop Over List
Before: ["Campaign A", "Campaign B", "Campaign A", "Campaign C", "Campaign B"]
After: ["Campaign A", "Campaign B", "Campaign C"]

Deduplicate URL Lists:
Extract URLs from Sitemap → Remove Duplicates → Count List Items
"Clean extracted URLs before processing to avoid duplicate work"

Merge Client Lists:
Multiple Sheets Read Data → Combine Lists → Remove Duplicates → Write to Sheets
"Merge client lists from different sources without duplicates" (see the sketch after these examples)

Keyword Cleaning:
Generate List (from text) → Remove Duplicates → Loop Over List
"Process unique keywords only for campaign creation"

How It Works

The node compares items and keeps only the first occurrence of each unique value.
Example Process:
Input List: ["apple", "banana", "apple", "cherry", "banana", "apple"]
Processing: Keeps the first "apple", the first "banana", and the first "cherry"
Output List: ["apple", "banana", "cherry"]
Data Type Handling:
  • Text comparison is case-sensitive: “Apple” ≠ “apple”
  • Numbers are compared by value: 123 = 123.0
  • Empty values are treated as duplicates of each other: if the list contains several empty entries, only the first is kept
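
The first-occurrence behavior and comparison rules above can be summarized in a short Python sketch. This is an illustration of the described behavior, not the node's actual implementation; Python happens to mirror the rules here, since 123 == 123.0 is true (compared by value) and string comparison is case-sensitive by default.

```python
def remove_duplicates(items):
    """Keep only the first occurrence of each value, preserving order."""
    seen = set()
    unique = []
    for item in items:
        if item not in seen:   # case-sensitive for text; 123 and 123.0 count as the same value
            seen.add(item)
            unique.append(item)
    return unique

print(remove_duplicates(["apple", "banana", "apple", "cherry", "banana", "apple"]))
# ['apple', 'banana', 'cherry']

print(remove_duplicates(["Apple", "apple"]))   # both kept: comparison is case-sensitive
print(remove_duplicates([123, 123.0]))         # only 123 kept: compared by value
print(remove_duplicates(["", "", "x"]))        # only one empty value kept
```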

Tips

Data Quality:
  • Always use this before loops to avoid processing the same item multiple times
  • Helps reduce API calls and processing time
List Merging:
  • Essential when combining data from multiple sources
  • Prevents duplicate entries in final outputs
Performance:
  • Reduces workflow execution time by eliminating redundant processing
  • Especially important for large lists with many duplicates
Case Sensitivity:
  • Remember that “Campaign A” and “campaign a” are different items
  • Consider standardizing text case before deduplication if needed
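
If "Campaign A" and "campaign a" should count as the same item, normalize the text before deduplicating. A minimal sketch of that pre-processing step in Python, assuming plain text items (the function name is illustrative):

```python
def remove_duplicates_case_insensitive(items):
    """Deduplicate text items, treating values that differ only by case as duplicates."""
    seen = set()
    unique = []
    for item in items:
        key = item.casefold()      # normalize case for comparison only
        if key not in seen:
            seen.add(key)
            unique.append(item)    # keep the original spelling of the first occurrence
    return unique

print(remove_duplicates_case_insensitive(["Campaign A", "campaign a", "Campaign B"]))
# ['Campaign A', 'Campaign B']
```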

FAQ