Remove Duplicates takes a list and returns only the unique items, eliminating any repeated values. It is essential for cleaning data and ensuring accurate processing.

When to Use It

  • Clean campaign lists that may have duplicate entries
  • Remove repeated URLs from sitemap extraction
  • Deduplicate client lists from multiple sources
  • Ensure unique keywords before processing

Inputs

Field   Type   Required   Description
List    List   Yes        The list to remove duplicates from

Outputs

Output        Description
Unique List   List with duplicate items removed

Credit Cost

Free to use - no credits required.

Real-World Examples

Clean Campaign Data:
Google Ads Get Report → Remove Duplicates → Loop Over List
Before: ["Campaign A", "Campaign B", "Campaign A", "Campaign C", "Campaign B"]
After: ["Campaign A", "Campaign B", "Campaign C"]

Deduplicate URL Lists:
Extract URLs from Sitemap → Remove Duplicates → Count List Items
"Clean extracted URLs before processing to avoid duplicate work"

Merge Client Lists:
Multiple Sheets Read Data → Combine Lists → Remove Duplicates → Write to Sheets
"Merge client lists from different sources without duplicates"

Keyword Cleaning:
Generate List (from text) → Remove Duplicates → Loop Over List
"Process unique keywords only for campaign creation"

How It Works

The node compares items and keeps only the first occurrence of each unique value.

Example Process:
Input List: ["apple", "banana", "apple", "cherry", "banana", "apple"]
Processing: Keeps the first "apple", first "banana", and first "cherry"
Output List: ["apple", "banana", "cherry"]
Data Type Handling:
  • Text comparison is case-sensitive: “Apple” ≠ “apple”
  • Numbers are compared by value: 123 = 123.0
  • Empty values are deduplicated too: if the list contains several empty entries, only the first is kept
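The node itself is not something you script, but its first-occurrence behavior can be sketched in a few lines of Python (illustrative only; the function name `remove_duplicates` is ours, not part of the product):

```python
def remove_duplicates(items):
    """Keep only the first occurrence of each value, preserving order."""
    seen = set()
    unique = []
    for item in items:
        if item not in seen:  # exact, case-sensitive comparison
            seen.add(item)
            unique.append(item)
    return unique

print(remove_duplicates(["apple", "banana", "apple", "cherry", "banana", "apple"]))
# → ['apple', 'banana', 'cherry']
```

Note that this sketch matches the data-type rules above: "Apple" and "apple" survive as two items, while 123 and 123.0 compare equal by value and collapse into one.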

Tips

Data Quality:
  • Always use this before loops to avoid processing the same item multiple times
  • Helps reduce API calls and processing time
List Merging:
  • Essential when combining data from multiple sources
  • Prevents duplicate entries in final outputs
Performance:
  • Reduces workflow execution time by eliminating redundant processing
  • Especially important for large lists with many duplicates
Case Sensitivity:
  • Remember that “Campaign A” and “campaign a” are different items
  • Consider standardizing text case before deduplication if needed

FAQ

Does the node preserve the original order of the list?
Yes, the node keeps items in their original order. It removes duplicates but maintains the sequence of the first occurrence of each unique item.

Is the comparison case-sensitive?
The comparison is case-sensitive, so "Apple" and "apple" are treated as different items. If you need case-insensitive deduplication, standardize the text case first.

What happens if my list is mostly duplicates?
You'll get a much shorter list with only unique items. For example, ["A", "A", "A", "B", "B"] becomes ["A", "B"].

Can it deduplicate complex data such as campaign objects?
Yes, but it compares the entire data structure. Two campaign objects are only considered duplicates if every field matches exactly. For partial matching, use other filtering methods.

How can I tell how many duplicates were removed?
Use Count List Items before and after Remove Duplicates, then subtract to see how many duplicates were found. This helps assess data quality.

Does it work with all data types?
Yes, it works with any data type. Numbers are compared by value (123 equals 123), text by exact match, and mixed lists handle each type appropriately.
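The count-before-and-after technique from the FAQ can be illustrated in Python (the variable names are ours; in a workflow you would use Count List Items nodes instead):

```python
original = ["A", "A", "A", "B", "B"]

# dict.fromkeys preserves insertion order (Python 3.7+), so this
# mimics first-occurrence deduplication in one expression.
unique = list(dict.fromkeys(original))

# Subtract the counts to see how many duplicates were removed.
duplicates_found = len(original) - len(unique)
print(unique, duplicates_found)
# → ['A', 'B'] 3
```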